00:00:00.001 Started by upstream project "autotest-per-patch" build number 132755
00:00:00.001 originally caused by:
00:00:00.001 Started by user sys_sgci
00:00:00.006 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/raid-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy
00:00:00.007 The recommended git tool is: git
00:00:00.007 using credential 00000000-0000-0000-0000-000000000002
00:00:00.009 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/raid-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.023 Fetching changes from the remote Git repository
00:00:00.027 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.042 Using shallow fetch with depth 1
00:00:00.042 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.042 > git --version # timeout=10
00:00:00.058 > git --version # 'git version 2.39.2'
00:00:00.058 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.073 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.073 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:05.770 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:05.786 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:05.801 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD)
00:00:05.801 > git config core.sparsecheckout # timeout=10
00:00:05.817 > git read-tree -mu HEAD # timeout=10
00:00:05.834 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5
00:00:05.864 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag"
00:00:05.864 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10
00:00:05.975 [Pipeline] Start of Pipeline
00:00:05.989 [Pipeline] library
00:00:05.991 Loading library shm_lib@master
00:00:05.991 Library shm_lib@master is cached. Copying from home.
00:00:06.006 [Pipeline] node
00:11:53.467 Still waiting to schedule task
00:11:53.468 Waiting for next available executor on ‘vagrant-vm-host’
00:19:29.842 Running on VM-host-WFP1 in /var/jenkins/workspace/raid-vg-autotest
00:19:29.845 [Pipeline] {
00:19:29.857 [Pipeline] catchError
00:19:29.859 [Pipeline] {
00:19:29.876 [Pipeline] wrap
00:19:29.887 [Pipeline] {
00:19:29.896 [Pipeline] stage
00:19:29.898 [Pipeline] { (Prologue)
00:19:29.920 [Pipeline] echo
00:19:29.922 Node: VM-host-WFP1
00:19:29.929 [Pipeline] cleanWs
00:19:29.939 [WS-CLEANUP] Deleting project workspace...
00:19:29.939 [WS-CLEANUP] Deferred wipeout is used...
00:19:29.945 [WS-CLEANUP] done
00:19:30.151 [Pipeline] setCustomBuildProperty
00:19:30.256 [Pipeline] httpRequest
00:19:30.709 [Pipeline] echo
00:19:30.711 Sorcerer 10.211.164.101 is alive
00:19:30.722 [Pipeline] retry
00:19:30.725 [Pipeline] {
00:19:30.739 [Pipeline] httpRequest
00:19:30.764 HttpMethod: GET
00:19:30.765 URL: http://10.211.164.101/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:19:30.765 Sending request to url: http://10.211.164.101/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:19:30.767 Response Code: HTTP/1.1 200 OK
00:19:30.768 Success: Status code 200 is in the accepted range: 200,404
00:19:30.768 Saving response body to /var/jenkins/workspace/raid-vg-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:19:31.034 [Pipeline] }
00:19:31.053 [Pipeline] // retry
00:19:31.062 [Pipeline] sh
00:19:31.384 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:19:31.398 [Pipeline] httpRequest
00:19:31.776 [Pipeline] echo
00:19:31.778 Sorcerer 10.211.164.101 is alive
00:19:31.788 [Pipeline] retry
00:19:31.790 [Pipeline] {
00:19:31.807 [Pipeline] httpRequest
00:19:31.812 HttpMethod: GET
00:19:31.812 URL: http://10.211.164.101/packages/spdk_b6a18b192deed44d4966a73e82862012fc8e96b4.tar.gz
00:19:31.813 Sending request to url: http://10.211.164.101/packages/spdk_b6a18b192deed44d4966a73e82862012fc8e96b4.tar.gz
00:19:31.813 Response Code: HTTP/1.1 200 OK
00:19:31.814 Success: Status code 200 is in the accepted range: 200,404
00:19:31.814 Saving response body to /var/jenkins/workspace/raid-vg-autotest/spdk_b6a18b192deed44d4966a73e82862012fc8e96b4.tar.gz
00:19:34.331 [Pipeline] }
00:19:34.349 [Pipeline] // retry
00:19:34.357 [Pipeline] sh
00:19:34.641 + tar --no-same-owner -xf spdk_b6a18b192deed44d4966a73e82862012fc8e96b4.tar.gz
00:19:37.185 [Pipeline] sh
00:19:37.467 + git -C spdk log --oneline -n5
00:19:37.467 b6a18b192 nvme/rdma: Don't limit max_sge if UMR is used
00:19:37.467 1148849d6 nvme/rdma: Register UMR per IO request
00:19:37.467 0787c2b4e accel/mlx5: Support mkey registration
00:19:37.467 0ea9ac02f accel/mlx5: Create pool of UMRs
00:19:37.467 60adca7e1 lib/mlx5: API to configure UMR
00:19:37.486 [Pipeline] writeFile
00:19:37.501 [Pipeline] sh
00:19:37.783 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh
00:19:37.796 [Pipeline] sh
00:19:38.079 + cat autorun-spdk.conf
00:19:38.079 SPDK_RUN_FUNCTIONAL_TEST=1
00:19:38.079 SPDK_RUN_ASAN=1
00:19:38.079 SPDK_RUN_UBSAN=1
00:19:38.079 SPDK_TEST_RAID=1
00:19:38.079 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:19:38.086 RUN_NIGHTLY=0
00:19:38.088 [Pipeline] }
00:19:38.101 [Pipeline] // stage
00:19:38.114 [Pipeline] stage
00:19:38.116 [Pipeline] { (Run VM)
00:19:38.127 [Pipeline] sh
00:19:38.409 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh
00:19:38.409 + echo 'Start stage prepare_nvme.sh'
00:19:38.409 Start stage prepare_nvme.sh
00:19:38.409 + [[ -n 4 ]]
00:19:38.409 + disk_prefix=ex4
00:19:38.409 + [[ -n /var/jenkins/workspace/raid-vg-autotest ]]
00:19:38.409 + [[ -e /var/jenkins/workspace/raid-vg-autotest/autorun-spdk.conf ]]
00:19:38.409 + source /var/jenkins/workspace/raid-vg-autotest/autorun-spdk.conf
00:19:38.409 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:19:38.409 ++ SPDK_RUN_ASAN=1
00:19:38.409 ++ SPDK_RUN_UBSAN=1
00:19:38.409 ++ SPDK_TEST_RAID=1
00:19:38.409 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:19:38.409 ++ RUN_NIGHTLY=0
00:19:38.409 + cd /var/jenkins/workspace/raid-vg-autotest
00:19:38.409 + nvme_files=()
00:19:38.409 + declare -A nvme_files
00:19:38.409 + backend_dir=/var/lib/libvirt/images/backends
00:19:38.409 + nvme_files['nvme.img']=5G
00:19:38.409 + nvme_files['nvme-cmb.img']=5G
00:19:38.409 + nvme_files['nvme-multi0.img']=4G
00:19:38.409 + nvme_files['nvme-multi1.img']=4G
00:19:38.409 + nvme_files['nvme-multi2.img']=4G
00:19:38.409 + nvme_files['nvme-openstack.img']=8G
00:19:38.409 + nvme_files['nvme-zns.img']=5G
00:19:38.409 + (( SPDK_TEST_NVME_PMR == 1 ))
00:19:38.409 + (( SPDK_TEST_FTL == 1 ))
00:19:38.409 + (( SPDK_TEST_NVME_FDP == 1 ))
00:19:38.409 + [[ ! -d /var/lib/libvirt/images/backends ]]
00:19:38.409 + for nvme in "${!nvme_files[@]}"
00:19:38.409 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-multi2.img -s 4G
00:19:38.409 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc
00:19:38.409 + for nvme in "${!nvme_files[@]}"
00:19:38.409 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-cmb.img -s 5G
00:19:38.409 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc
00:19:38.409 + for nvme in "${!nvme_files[@]}"
00:19:38.409 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-openstack.img -s 8G
00:19:38.409 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc
00:19:38.409 + for nvme in "${!nvme_files[@]}"
00:19:38.409 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-zns.img -s 5G
00:19:38.409 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc
00:19:38.409 + for nvme in "${!nvme_files[@]}"
00:19:38.409 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-multi1.img -s 4G
00:19:38.409 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc
00:19:38.409 + for nvme in "${!nvme_files[@]}"
00:19:38.409 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-multi0.img -s 4G
00:19:38.669 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc
00:19:38.669 + for nvme in "${!nvme_files[@]}"
00:19:38.669 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme.img -s 5G
00:19:38.669 Formatting '/var/lib/libvirt/images/backends/ex4-nvme.img', fmt=raw size=5368709120 preallocation=falloc
00:19:38.669 ++ sudo grep -rl ex4-nvme.img /etc/libvirt/qemu
00:19:38.669 + echo 'End stage prepare_nvme.sh'
00:19:38.669 End stage prepare_nvme.sh
00:19:38.682 [Pipeline] sh
00:19:38.966 + DISTRO=fedora39 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh
00:19:38.966 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex4-nvme.img -b /var/lib/libvirt/images/backends/ex4-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex4-nvme-multi1.img:/var/lib/libvirt/images/backends/ex4-nvme-multi2.img -H -a -v -f fedora39
00:19:38.966
00:19:38.966 DIR=/var/jenkins/workspace/raid-vg-autotest/spdk/scripts/vagrant
00:19:38.966 SPDK_DIR=/var/jenkins/workspace/raid-vg-autotest/spdk
00:19:38.966 VAGRANT_TARGET=/var/jenkins/workspace/raid-vg-autotest
00:19:38.966 HELP=0
00:19:38.966 DRY_RUN=0
00:19:38.966 NVME_FILE=/var/lib/libvirt/images/backends/ex4-nvme.img,/var/lib/libvirt/images/backends/ex4-nvme-multi0.img,
00:19:38.966 NVME_DISKS_TYPE=nvme,nvme,
00:19:38.966 NVME_AUTO_CREATE=0
00:19:38.966 NVME_DISKS_NAMESPACES=,/var/lib/libvirt/images/backends/ex4-nvme-multi1.img:/var/lib/libvirt/images/backends/ex4-nvme-multi2.img,
00:19:38.966 NVME_CMB=,,
00:19:38.966 NVME_PMR=,,
00:19:38.966 NVME_ZNS=,,
00:19:38.966 NVME_MS=,,
00:19:38.966 NVME_FDP=,,
00:19:38.966 SPDK_VAGRANT_DISTRO=fedora39
00:19:38.966 SPDK_VAGRANT_VMCPU=10
00:19:38.966 SPDK_VAGRANT_VMRAM=12288
00:19:38.966 SPDK_VAGRANT_PROVIDER=libvirt
00:19:38.966 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911
00:19:38.966 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64
00:19:38.966 SPDK_OPENSTACK_NETWORK=0
00:19:38.966 VAGRANT_PACKAGE_BOX=0
00:19:38.966 VAGRANTFILE=/var/jenkins/workspace/raid-vg-autotest/spdk/scripts/vagrant/Vagrantfile
00:19:38.966 FORCE_DISTRO=true
00:19:38.966 VAGRANT_BOX_VERSION=
00:19:38.966 EXTRA_VAGRANTFILES=
00:19:38.966 NIC_MODEL=e1000
00:19:38.966
00:19:38.966 mkdir: created directory '/var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt'
00:19:38.966 /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt /var/jenkins/workspace/raid-vg-autotest
00:19:41.502 Bringing machine 'default' up with 'libvirt' provider...
00:19:42.882 ==> default: Creating image (snapshot of base box volume).
00:19:42.882 ==> default: Creating domain with the following settings...
00:19:42.882 ==> default:  -- Name: fedora39-39-1.5-1721788873-2326_default_1733509092_06f6f9dec80b451a77a5
00:19:42.882 ==> default:  -- Domain type: kvm
00:19:42.882 ==> default:  -- Cpus: 10
00:19:42.882 ==> default:  -- Feature: acpi
00:19:42.882 ==> default:  -- Feature: apic
00:19:42.882 ==> default:  -- Feature: pae
00:19:42.882 ==> default:  -- Memory: 12288M
00:19:42.882 ==> default:  -- Memory Backing: hugepages:
00:19:42.882 ==> default:  -- Management MAC:
00:19:42.882 ==> default:  -- Loader:
00:19:42.882 ==> default:  -- Nvram:
00:19:42.882 ==> default:  -- Base box: spdk/fedora39
00:19:42.882 ==> default:  -- Storage pool: default
00:19:42.882 ==> default:  -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1733509092_06f6f9dec80b451a77a5.img (20G)
00:19:42.882 ==> default:  -- Volume Cache: default
00:19:42.882 ==> default:  -- Kernel:
00:19:42.882 ==> default:  -- Initrd:
00:19:42.882 ==> default:  -- Graphics Type: vnc
00:19:42.882 ==> default:  -- Graphics Port: -1
00:19:42.882 ==> default:  -- Graphics IP: 127.0.0.1
00:19:42.882 ==> default:  -- Graphics Password: Not defined
00:19:42.882 ==> default:  -- Video Type: cirrus
00:19:42.882 ==> default:  -- Video VRAM: 9216
00:19:42.882 ==> default:  -- Sound Type:
00:19:42.882 ==> default:  -- Keymap: en-us
00:19:42.882 ==> default:  -- TPM Path:
00:19:42.882 ==> default:  -- INPUT: type=mouse, bus=ps2
00:19:42.882 ==> default:  -- Command line args:
00:19:42.882 ==> default:  -> value=-device,
00:19:42.882 ==> default:  -> value=nvme,id=nvme-0,serial=12340,addr=0x10,
00:19:42.882 ==> default:  -> value=-drive,
00:19:42.882 ==> default:  -> value=format=raw,file=/var/lib/libvirt/images/backends/ex4-nvme.img,if=none,id=nvme-0-drive0,
00:19:42.882 ==> default:  -> value=-device,
00:19:42.882 ==> default:  -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:19:42.882 ==> default:  -> value=-device,
00:19:42.882 ==> default:  -> value=nvme,id=nvme-1,serial=12341,addr=0x11,
00:19:42.882 ==> default:  -> value=-drive,
00:19:42.882 ==> default:  -> value=format=raw,file=/var/lib/libvirt/images/backends/ex4-nvme-multi0.img,if=none,id=nvme-1-drive0,
00:19:42.882 ==> default:  -> value=-device,
00:19:42.882 ==> default:  -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:19:42.882 ==> default:  -> value=-drive,
00:19:42.882 ==> default:  -> value=format=raw,file=/var/lib/libvirt/images/backends/ex4-nvme-multi1.img,if=none,id=nvme-1-drive1,
00:19:42.882 ==> default:  -> value=-device,
00:19:42.882 ==> default:  -> value=nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:19:42.882 ==> default:  -> value=-drive,
00:19:42.882 ==> default:  -> value=format=raw,file=/var/lib/libvirt/images/backends/ex4-nvme-multi2.img,if=none,id=nvme-1-drive2,
00:19:42.882 ==> default:  -> value=-device,
00:19:42.882 ==> default:  -> value=nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:19:43.142 ==> default: Creating shared folders metadata...
00:19:43.142 ==> default: Starting domain.
00:19:45.102 ==> default: Waiting for domain to get an IP address...
00:20:07.029 ==> default: Waiting for SSH to become available...
00:20:07.029 ==> default: Configuring and enabling network interfaces...
00:20:10.316     default: SSH address: 192.168.121.88:22
00:20:10.316     default: SSH username: vagrant
00:20:10.316     default: SSH auth method: private key
00:20:12.851 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/raid-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk
00:20:22.834 ==> default: Mounting SSHFS shared folder...
00:20:23.767 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/raid-vg-autotest/fedora39-libvirt/output => /home/vagrant/spdk_repo/output
00:20:23.767 ==> default: Checking Mount..
00:20:25.666 ==> default: Folder Successfully Mounted!
00:20:25.667 ==> default: Running provisioner: file...
00:20:26.600     default: ~/.gitconfig => .gitconfig
00:20:27.167
00:20:27.167 SUCCESS!
00:20:27.167
00:20:27.167 cd to /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt and type "vagrant ssh" to use.
00:20:27.167 Use vagrant "suspend" and vagrant "resume" to stop and start.
00:20:27.167 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt" to destroy all trace of vm.
00:20:27.167
00:20:27.175 [Pipeline] }
00:20:27.189 [Pipeline] // stage
00:20:27.198 [Pipeline] dir
00:20:27.199 Running in /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt
00:20:27.201 [Pipeline] {
00:20:27.212 [Pipeline] catchError
00:20:27.214 [Pipeline] {
00:20:27.225 [Pipeline] sh
00:20:27.506 + vagrant ssh-config --host vagrant
00:20:27.506 + sed -ne /^Host/,$p
00:20:27.506 + tee ssh_conf
00:20:30.828 Host vagrant
00:20:30.828 HostName 192.168.121.88
00:20:30.828 User vagrant
00:20:30.828 Port 22
00:20:30.828 UserKnownHostsFile /dev/null
00:20:30.828 StrictHostKeyChecking no
00:20:30.828 PasswordAuthentication no
00:20:30.828 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39
00:20:30.828 IdentitiesOnly yes
00:20:30.828 LogLevel FATAL
00:20:30.828 ForwardAgent yes
00:20:30.828 ForwardX11 yes
00:20:30.828
00:20:30.844 [Pipeline] withEnv
00:20:30.847 [Pipeline] {
00:20:30.863 [Pipeline] sh
00:20:31.145 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash
00:20:31.146 source /etc/os-release
00:20:31.146 [[ -e /image.version ]] && img=$(< /image.version)
00:20:31.146 # Minimal, systemd-like check.
00:20:31.146 if [[ -e /.dockerenv ]]; then
00:20:31.146 # Clear garbage from the node's name:
00:20:31.146 #  agt-er_autotest_547-896 -> autotest_547-896
00:20:31.146 #  $HOSTNAME is the actual container id
00:20:31.146 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_}
00:20:31.146 if grep -q "/etc/hostname" /proc/self/mountinfo; then
00:20:31.146 # We can assume this is a mount from a host where container is running,
00:20:31.146 # so fetch its hostname to easily identify the target swarm worker.
00:20:31.146 container="$(< /etc/hostname) ($agent)"
00:20:31.146 else
00:20:31.146 # Fallback
00:20:31.146 container=$agent
00:20:31.146 fi
00:20:31.146 fi
00:20:31.146 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}"
00:20:31.146
00:20:31.418 [Pipeline] }
00:20:31.437 [Pipeline] // withEnv
00:20:31.445 [Pipeline] setCustomBuildProperty
00:20:31.460 [Pipeline] stage
00:20:31.463 [Pipeline] { (Tests)
00:20:31.480 [Pipeline] sh
00:20:31.762 + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./
00:20:32.035 [Pipeline] sh
00:20:32.318 + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./
00:20:32.593 [Pipeline] timeout
00:20:32.593 Timeout set to expire in 1 hr 30 min
00:20:32.595 [Pipeline] {
00:20:32.626 [Pipeline] sh
00:20:32.908 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard
00:20:33.475 HEAD is now at b6a18b192 nvme/rdma: Don't limit max_sge if UMR is used
00:20:33.488 [Pipeline] sh
00:20:33.766 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo
00:20:34.040 [Pipeline] sh
00:20:34.321 + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo
00:20:34.596 [Pipeline] sh
00:20:34.875 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=raid-vg-autotest ./autoruner.sh spdk_repo
00:20:35.133 ++ readlink -f spdk_repo
00:20:35.133 + DIR_ROOT=/home/vagrant/spdk_repo
00:20:35.133 + [[ -n /home/vagrant/spdk_repo ]]
00:20:35.133 + DIR_SPDK=/home/vagrant/spdk_repo/spdk
00:20:35.133 + DIR_OUTPUT=/home/vagrant/spdk_repo/output
00:20:35.133 + [[ -d /home/vagrant/spdk_repo/spdk ]]
00:20:35.133 + [[ ! -d /home/vagrant/spdk_repo/output ]]
00:20:35.134 + [[ -d /home/vagrant/spdk_repo/output ]]
00:20:35.134 + [[ raid-vg-autotest == pkgdep-* ]]
00:20:35.134 + cd /home/vagrant/spdk_repo
00:20:35.134 + source /etc/os-release
00:20:35.134 ++ NAME='Fedora Linux'
00:20:35.134 ++ VERSION='39 (Cloud Edition)'
00:20:35.134 ++ ID=fedora
00:20:35.134 ++ VERSION_ID=39
00:20:35.134 ++ VERSION_CODENAME=
00:20:35.134 ++ PLATFORM_ID=platform:f39
00:20:35.134 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)'
00:20:35.134 ++ ANSI_COLOR='0;38;2;60;110;180'
00:20:35.134 ++ LOGO=fedora-logo-icon
00:20:35.134 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39
00:20:35.134 ++ HOME_URL=https://fedoraproject.org/
00:20:35.134 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/
00:20:35.134 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:20:35.134 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:20:35.134 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:20:35.134 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39
00:20:35.134 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:20:35.134 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39
00:20:35.134 ++ SUPPORT_END=2024-11-12
00:20:35.134 ++ VARIANT='Cloud Edition'
00:20:35.134 ++ VARIANT_ID=cloud
00:20:35.134 + uname -a
00:20:35.134 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux
00:20:35.134 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status
00:20:35.700 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:20:35.700 Hugepages
00:20:35.700 node hugesize free / total
00:20:35.700 node0 1048576kB 0 / 0
00:20:35.700 node0 2048kB 0 / 0
00:20:35.700
00:20:35.700 Type BDF Vendor Device NUMA Driver Device Block devices
00:20:35.700 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda
00:20:35.700 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1
00:20:35.700 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3
00:20:35.700 + rm -f /tmp/spdk-ld-path
00:20:35.700 + source autorun-spdk.conf
00:20:35.700 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:20:35.700 ++ SPDK_RUN_ASAN=1
00:20:35.700 ++ SPDK_RUN_UBSAN=1
00:20:35.700 ++ SPDK_TEST_RAID=1
00:20:35.700 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:20:35.700 ++ RUN_NIGHTLY=0
00:20:35.700 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:20:35.700 + [[ -n '' ]]
00:20:35.700 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk
00:20:35.959 + for M in /var/spdk/build-*-manifest.txt
00:20:35.959 + [[ -f /var/spdk/build-kernel-manifest.txt ]]
00:20:35.959 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/
00:20:35.959 + for M in /var/spdk/build-*-manifest.txt
00:20:35.959 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:20:35.959 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/
00:20:35.959 + for M in /var/spdk/build-*-manifest.txt
00:20:35.959 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:20:35.959 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/
00:20:35.959 ++ uname
00:20:35.959 + [[ Linux == \L\i\n\u\x ]]
00:20:35.959 + sudo dmesg -T
00:20:35.959 + sudo dmesg --clear
00:20:35.959 + dmesg_pid=5214
00:20:35.959 + sudo dmesg -Tw
00:20:35.959 + [[ Fedora Linux == FreeBSD ]]
00:20:35.959 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:20:35.959 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:20:35.959 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:20:35.959 + [[ -x /usr/src/fio-static/fio ]]
00:20:35.959 + export FIO_BIN=/usr/src/fio-static/fio
00:20:35.959 + FIO_BIN=/usr/src/fio-static/fio
00:20:35.959 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]]
00:20:35.959 + [[ ! -v VFIO_QEMU_BIN ]]
00:20:35.959 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:20:35.959 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:20:35.959 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:20:35.959 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:20:35.959 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:20:35.959 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:20:35.959 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf
00:20:35.959 18:19:06 -- common/autotest_common.sh@1710 -- $ [[ n == y ]]
18:19:06 -- spdk/autorun.sh@20 -- $ source /home/vagrant/spdk_repo/autorun-spdk.conf
18:19:06 -- spdk_repo/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1
18:19:06 -- spdk_repo/autorun-spdk.conf@2 -- $ SPDK_RUN_ASAN=1
18:19:06 -- spdk_repo/autorun-spdk.conf@3 -- $ SPDK_RUN_UBSAN=1
18:19:06 -- spdk_repo/autorun-spdk.conf@4 -- $ SPDK_TEST_RAID=1
18:19:06 -- spdk_repo/autorun-spdk.conf@5 -- $ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
18:19:06 -- spdk_repo/autorun-spdk.conf@6 -- $ RUN_NIGHTLY=0
18:19:06 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT
18:19:06 -- spdk/autorun.sh@25 -- $ /home/vagrant/spdk_repo/spdk/autobuild.sh /home/vagrant/spdk_repo/autorun-spdk.conf
00:20:36.219 18:19:06 -- common/autotest_common.sh@1710 -- $ [[ n == y ]]
18:19:06 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh
18:19:06 -- scripts/common.sh@15 -- $ shopt -s extglob
18:19:06 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]]
18:19:06 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
18:19:06 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
18:19:06 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
18:19:06 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
18:19:06 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
18:19:06 -- paths/export.sh@5 -- $ export PATH
18:19:06 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
18:19:06 -- common/autobuild_common.sh@492 -- $ out=/home/vagrant/spdk_repo/spdk/../output
18:19:06 -- common/autobuild_common.sh@493 -- $ date +%s
00:20:36.220 18:19:07 -- common/autobuild_common.sh@493 -- $ mktemp -dt spdk_1733509147.XXXXXX
18:19:07 -- common/autobuild_common.sh@493 -- $ SPDK_WORKSPACE=/tmp/spdk_1733509147.jbLPnF
18:19:07 -- common/autobuild_common.sh@495 -- $ [[ -n '' ]]
18:19:07 -- common/autobuild_common.sh@499 -- $ '[' -n '' ']'
18:19:07 -- common/autobuild_common.sh@502 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/'
18:19:07 -- common/autobuild_common.sh@506 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp'
18:19:07 -- common/autobuild_common.sh@508 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs'
18:19:07 -- common/autobuild_common.sh@509 -- $ get_config_params
18:19:07 -- common/autotest_common.sh@409 -- $ xtrace_disable
18:19:07 -- common/autotest_common.sh@10 -- $ set +x
18:19:07 -- common/autobuild_common.sh@509 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-raid5f'
18:19:07 -- common/autobuild_common.sh@511 -- $ start_monitor_resources
18:19:07 -- pm/common@17 -- $ local monitor
18:19:07 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
18:19:07 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
18:19:07 -- pm/common@25 -- $ sleep 1
18:19:07 -- pm/common@21 -- $ date +%s
18:19:07 -- pm/common@21 -- $ date +%s
00:20:36.220 18:19:07 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1733509147
18:19:07 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1733509147
00:20:36.220 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1733509147_collect-vmstat.pm.log
00:20:36.220 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1733509147_collect-cpu-load.pm.log
00:20:37.157 18:19:08 -- common/autobuild_common.sh@512 -- $ trap stop_monitor_resources EXIT
18:19:08 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
18:19:08 -- spdk/autobuild.sh@12 -- $ umask 022
18:19:08 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk
18:19:08 -- spdk/autobuild.sh@16 -- $ date -u
00:20:37.157 Fri Dec 6 06:19:08 PM UTC 2024
18:19:08 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:20:37.157 v25.01-pre-311-gb6a18b192
18:19:08 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']'
18:19:08 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan'
18:19:08 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
18:19:08 -- common/autotest_common.sh@1111 -- $ xtrace_disable
18:19:08 -- common/autotest_common.sh@10 -- $ set +x
00:20:37.157 ************************************
00:20:37.157 START TEST asan
00:20:37.157 ************************************
00:20:37.157 using asan
18:19:08 asan -- common/autotest_common.sh@1129 -- $ echo 'using asan'
00:20:37.158
00:20:37.158 real	0m0.000s
00:20:37.158 user	0m0.000s
00:20:37.158 sys	0m0.000s
18:19:08 asan -- common/autotest_common.sh@1130 -- $ xtrace_disable
18:19:08 asan -- common/autotest_common.sh@10 -- $ set +x
00:20:37.158 ************************************
00:20:37.158 END TEST asan
00:20:37.158 ************************************
00:20:37.417 18:19:08 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
18:19:08 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
18:19:08 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
18:19:08 -- common/autotest_common.sh@1111 -- $ xtrace_disable
18:19:08 -- common/autotest_common.sh@10 -- $ set +x
00:20:37.417 ************************************
00:20:37.417 START TEST ubsan
00:20:37.417 ************************************
00:20:37.417 using ubsan
18:19:08 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan'
00:20:37.417
00:20:37.417 real	0m0.000s
00:20:37.417 user	0m0.000s
00:20:37.417 sys	0m0.000s
18:19:08 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable
18:19:08 ubsan -- common/autotest_common.sh@10 -- $ set +x
00:20:37.417 ************************************
00:20:37.417 END TEST ubsan
00:20:37.417 ************************************
18:19:08 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']'
18:19:08 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
18:19:08 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
18:19:08 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]]
18:19:08 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
18:19:08 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]]
18:19:08 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]]
18:19:08 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]]
18:19:08 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-raid5f --with-shared
00:20:37.677 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk
00:20:37.677 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build
00:20:38.284 Using 'verbs' RDMA provider
00:20:54.218 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done.
00:21:09.109 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done.
00:21:09.677 Creating mk/config.mk...done.
00:21:09.677 Creating mk/cc.flags.mk...done.
00:21:09.677 Type 'make' to build.
18:19:40 -- spdk/autobuild.sh@70 -- $ run_test make make -j10
18:19:40 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
18:19:40 -- common/autotest_common.sh@1111 -- $ xtrace_disable
18:19:40 -- common/autotest_common.sh@10 -- $ set +x
00:21:09.677 ************************************
00:21:09.677 START TEST make
00:21:09.677 ************************************
18:19:40 make -- common/autotest_common.sh@1129 -- $ make -j10
00:21:10.244 make[1]: Nothing to be done for 'all'.
00:21:22.453 The Meson build system 00:21:22.453 Version: 1.5.0 00:21:22.453 Source dir: /home/vagrant/spdk_repo/spdk/dpdk 00:21:22.453 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp 00:21:22.453 Build type: native build 00:21:22.453 Program cat found: YES (/usr/bin/cat) 00:21:22.453 Project name: DPDK 00:21:22.453 Project version: 24.03.0 00:21:22.453 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:21:22.453 C linker for the host machine: cc ld.bfd 2.40-14 00:21:22.453 Host machine cpu family: x86_64 00:21:22.453 Host machine cpu: x86_64 00:21:22.453 Message: ## Building in Developer Mode ## 00:21:22.453 Program pkg-config found: YES (/usr/bin/pkg-config) 00:21:22.453 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh) 00:21:22.453 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:21:22.453 Program python3 found: YES (/usr/bin/python3) 00:21:22.453 Program cat found: YES (/usr/bin/cat) 00:21:22.453 Compiler for C supports arguments -march=native: YES 00:21:22.453 Checking for size of "void *" : 8 00:21:22.453 Checking for size of "void *" : 8 (cached) 00:21:22.453 Compiler for C supports link arguments -Wl,--undefined-version: YES 00:21:22.453 Library m found: YES 00:21:22.453 Library numa found: YES 00:21:22.453 Has header "numaif.h" : YES 00:21:22.453 Library fdt found: NO 00:21:22.453 Library execinfo found: NO 00:21:22.453 Has header "execinfo.h" : YES 00:21:22.453 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:21:22.453 Run-time dependency libarchive found: NO (tried pkgconfig) 00:21:22.453 Run-time dependency libbsd found: NO (tried pkgconfig) 00:21:22.453 Run-time dependency jansson found: NO (tried pkgconfig) 00:21:22.453 Run-time dependency openssl found: YES 3.1.1 00:21:22.453 Run-time dependency libpcap found: YES 1.10.4 00:21:22.453 Has header "pcap.h" with dependency 
libpcap: YES 00:21:22.453 Compiler for C supports arguments -Wcast-qual: YES 00:21:22.453 Compiler for C supports arguments -Wdeprecated: YES 00:21:22.453 Compiler for C supports arguments -Wformat: YES 00:21:22.453 Compiler for C supports arguments -Wformat-nonliteral: NO 00:21:22.453 Compiler for C supports arguments -Wformat-security: NO 00:21:22.453 Compiler for C supports arguments -Wmissing-declarations: YES 00:21:22.453 Compiler for C supports arguments -Wmissing-prototypes: YES 00:21:22.453 Compiler for C supports arguments -Wnested-externs: YES 00:21:22.453 Compiler for C supports arguments -Wold-style-definition: YES 00:21:22.453 Compiler for C supports arguments -Wpointer-arith: YES 00:21:22.453 Compiler for C supports arguments -Wsign-compare: YES 00:21:22.453 Compiler for C supports arguments -Wstrict-prototypes: YES 00:21:22.453 Compiler for C supports arguments -Wundef: YES 00:21:22.453 Compiler for C supports arguments -Wwrite-strings: YES 00:21:22.453 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:21:22.453 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:21:22.453 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:21:22.453 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:21:22.453 Program objdump found: YES (/usr/bin/objdump) 00:21:22.453 Compiler for C supports arguments -mavx512f: YES 00:21:22.453 Checking if "AVX512 checking" compiles: YES 00:21:22.453 Fetching value of define "__SSE4_2__" : 1 00:21:22.453 Fetching value of define "__AES__" : 1 00:21:22.453 Fetching value of define "__AVX__" : 1 00:21:22.453 Fetching value of define "__AVX2__" : 1 00:21:22.453 Fetching value of define "__AVX512BW__" : 1 00:21:22.453 Fetching value of define "__AVX512CD__" : 1 00:21:22.453 Fetching value of define "__AVX512DQ__" : 1 00:21:22.453 Fetching value of define "__AVX512F__" : 1 00:21:22.453 Fetching value of define "__AVX512VL__" : 1 00:21:22.453 Fetching value of define 
"__PCLMUL__" : 1 00:21:22.453 Fetching value of define "__RDRND__" : 1 00:21:22.453 Fetching value of define "__RDSEED__" : 1 00:21:22.453 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:21:22.453 Fetching value of define "__znver1__" : (undefined) 00:21:22.453 Fetching value of define "__znver2__" : (undefined) 00:21:22.453 Fetching value of define "__znver3__" : (undefined) 00:21:22.453 Fetching value of define "__znver4__" : (undefined) 00:21:22.453 Library asan found: YES 00:21:22.453 Compiler for C supports arguments -Wno-format-truncation: YES 00:21:22.453 Message: lib/log: Defining dependency "log" 00:21:22.453 Message: lib/kvargs: Defining dependency "kvargs" 00:21:22.453 Message: lib/telemetry: Defining dependency "telemetry" 00:21:22.453 Library rt found: YES 00:21:22.453 Checking for function "getentropy" : NO 00:21:22.453 Message: lib/eal: Defining dependency "eal" 00:21:22.453 Message: lib/ring: Defining dependency "ring" 00:21:22.453 Message: lib/rcu: Defining dependency "rcu" 00:21:22.453 Message: lib/mempool: Defining dependency "mempool" 00:21:22.453 Message: lib/mbuf: Defining dependency "mbuf" 00:21:22.453 Fetching value of define "__PCLMUL__" : 1 (cached) 00:21:22.453 Fetching value of define "__AVX512F__" : 1 (cached) 00:21:22.453 Fetching value of define "__AVX512BW__" : 1 (cached) 00:21:22.453 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:21:22.453 Fetching value of define "__AVX512VL__" : 1 (cached) 00:21:22.453 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached) 00:21:22.453 Compiler for C supports arguments -mpclmul: YES 00:21:22.453 Compiler for C supports arguments -maes: YES 00:21:22.453 Compiler for C supports arguments -mavx512f: YES (cached) 00:21:22.453 Compiler for C supports arguments -mavx512bw: YES 00:21:22.453 Compiler for C supports arguments -mavx512dq: YES 00:21:22.453 Compiler for C supports arguments -mavx512vl: YES 00:21:22.453 Compiler for C supports arguments -mvpclmulqdq: YES 
00:21:22.453 Compiler for C supports arguments -mavx2: YES 00:21:22.453 Compiler for C supports arguments -mavx: YES 00:21:22.453 Message: lib/net: Defining dependency "net" 00:21:22.453 Message: lib/meter: Defining dependency "meter" 00:21:22.453 Message: lib/ethdev: Defining dependency "ethdev" 00:21:22.453 Message: lib/pci: Defining dependency "pci" 00:21:22.453 Message: lib/cmdline: Defining dependency "cmdline" 00:21:22.453 Message: lib/hash: Defining dependency "hash" 00:21:22.453 Message: lib/timer: Defining dependency "timer" 00:21:22.453 Message: lib/compressdev: Defining dependency "compressdev" 00:21:22.453 Message: lib/cryptodev: Defining dependency "cryptodev" 00:21:22.453 Message: lib/dmadev: Defining dependency "dmadev" 00:21:22.453 Compiler for C supports arguments -Wno-cast-qual: YES 00:21:22.453 Message: lib/power: Defining dependency "power" 00:21:22.453 Message: lib/reorder: Defining dependency "reorder" 00:21:22.453 Message: lib/security: Defining dependency "security" 00:21:22.453 Has header "linux/userfaultfd.h" : YES 00:21:22.453 Has header "linux/vduse.h" : YES 00:21:22.453 Message: lib/vhost: Defining dependency "vhost" 00:21:22.453 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:21:22.453 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:21:22.453 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:21:22.453 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:21:22.453 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:21:22.453 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:21:22.453 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:21:22.453 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:21:22.453 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:21:22.453 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:21:22.453 
Program doxygen found: YES (/usr/local/bin/doxygen) 00:21:22.453 Configuring doxy-api-html.conf using configuration 00:21:22.453 Configuring doxy-api-man.conf using configuration 00:21:22.453 Program mandb found: YES (/usr/bin/mandb) 00:21:22.453 Program sphinx-build found: NO 00:21:22.453 Configuring rte_build_config.h using configuration 00:21:22.453 Message: 00:21:22.453 ================= 00:21:22.453 Applications Enabled 00:21:22.453 ================= 00:21:22.453 00:21:22.453 apps: 00:21:22.453 00:21:22.453 00:21:22.453 Message: 00:21:22.453 ================= 00:21:22.453 Libraries Enabled 00:21:22.453 ================= 00:21:22.453 00:21:22.453 libs: 00:21:22.453 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:21:22.453 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:21:22.453 cryptodev, dmadev, power, reorder, security, vhost, 00:21:22.454 00:21:22.454 Message: 00:21:22.454 =============== 00:21:22.454 Drivers Enabled 00:21:22.454 =============== 00:21:22.454 00:21:22.454 common: 00:21:22.454 00:21:22.454 bus: 00:21:22.454 pci, vdev, 00:21:22.454 mempool: 00:21:22.454 ring, 00:21:22.454 dma: 00:21:22.454 00:21:22.454 net: 00:21:22.454 00:21:22.454 crypto: 00:21:22.454 00:21:22.454 compress: 00:21:22.454 00:21:22.454 vdpa: 00:21:22.454 00:21:22.454 00:21:22.454 Message: 00:21:22.454 ================= 00:21:22.454 Content Skipped 00:21:22.454 ================= 00:21:22.454 00:21:22.454 apps: 00:21:22.454 dumpcap: explicitly disabled via build config 00:21:22.454 graph: explicitly disabled via build config 00:21:22.454 pdump: explicitly disabled via build config 00:21:22.454 proc-info: explicitly disabled via build config 00:21:22.454 test-acl: explicitly disabled via build config 00:21:22.454 test-bbdev: explicitly disabled via build config 00:21:22.454 test-cmdline: explicitly disabled via build config 00:21:22.454 test-compress-perf: explicitly disabled via build config 00:21:22.454 test-crypto-perf: explicitly disabled via build 
config 00:21:22.454 test-dma-perf: explicitly disabled via build config 00:21:22.454 test-eventdev: explicitly disabled via build config 00:21:22.454 test-fib: explicitly disabled via build config 00:21:22.454 test-flow-perf: explicitly disabled via build config 00:21:22.454 test-gpudev: explicitly disabled via build config 00:21:22.454 test-mldev: explicitly disabled via build config 00:21:22.454 test-pipeline: explicitly disabled via build config 00:21:22.454 test-pmd: explicitly disabled via build config 00:21:22.454 test-regex: explicitly disabled via build config 00:21:22.454 test-sad: explicitly disabled via build config 00:21:22.454 test-security-perf: explicitly disabled via build config 00:21:22.454 00:21:22.454 libs: 00:21:22.454 argparse: explicitly disabled via build config 00:21:22.454 metrics: explicitly disabled via build config 00:21:22.454 acl: explicitly disabled via build config 00:21:22.454 bbdev: explicitly disabled via build config 00:21:22.454 bitratestats: explicitly disabled via build config 00:21:22.454 bpf: explicitly disabled via build config 00:21:22.454 cfgfile: explicitly disabled via build config 00:21:22.454 distributor: explicitly disabled via build config 00:21:22.454 efd: explicitly disabled via build config 00:21:22.454 eventdev: explicitly disabled via build config 00:21:22.454 dispatcher: explicitly disabled via build config 00:21:22.454 gpudev: explicitly disabled via build config 00:21:22.454 gro: explicitly disabled via build config 00:21:22.454 gso: explicitly disabled via build config 00:21:22.454 ip_frag: explicitly disabled via build config 00:21:22.454 jobstats: explicitly disabled via build config 00:21:22.454 latencystats: explicitly disabled via build config 00:21:22.454 lpm: explicitly disabled via build config 00:21:22.454 member: explicitly disabled via build config 00:21:22.454 pcapng: explicitly disabled via build config 00:21:22.454 rawdev: explicitly disabled via build config 00:21:22.454 regexdev: explicitly 
disabled via build config 00:21:22.454 mldev: explicitly disabled via build config 00:21:22.454 rib: explicitly disabled via build config 00:21:22.454 sched: explicitly disabled via build config 00:21:22.454 stack: explicitly disabled via build config 00:21:22.454 ipsec: explicitly disabled via build config 00:21:22.454 pdcp: explicitly disabled via build config 00:21:22.454 fib: explicitly disabled via build config 00:21:22.454 port: explicitly disabled via build config 00:21:22.454 pdump: explicitly disabled via build config 00:21:22.454 table: explicitly disabled via build config 00:21:22.454 pipeline: explicitly disabled via build config 00:21:22.454 graph: explicitly disabled via build config 00:21:22.454 node: explicitly disabled via build config 00:21:22.454 00:21:22.454 drivers: 00:21:22.454 common/cpt: not in enabled drivers build config 00:21:22.454 common/dpaax: not in enabled drivers build config 00:21:22.454 common/iavf: not in enabled drivers build config 00:21:22.454 common/idpf: not in enabled drivers build config 00:21:22.454 common/ionic: not in enabled drivers build config 00:21:22.454 common/mvep: not in enabled drivers build config 00:21:22.454 common/octeontx: not in enabled drivers build config 00:21:22.454 bus/auxiliary: not in enabled drivers build config 00:21:22.454 bus/cdx: not in enabled drivers build config 00:21:22.454 bus/dpaa: not in enabled drivers build config 00:21:22.454 bus/fslmc: not in enabled drivers build config 00:21:22.454 bus/ifpga: not in enabled drivers build config 00:21:22.454 bus/platform: not in enabled drivers build config 00:21:22.454 bus/uacce: not in enabled drivers build config 00:21:22.454 bus/vmbus: not in enabled drivers build config 00:21:22.454 common/cnxk: not in enabled drivers build config 00:21:22.454 common/mlx5: not in enabled drivers build config 00:21:22.454 common/nfp: not in enabled drivers build config 00:21:22.454 common/nitrox: not in enabled drivers build config 00:21:22.454 common/qat: not 
in enabled drivers build config 00:21:22.454 common/sfc_efx: not in enabled drivers build config 00:21:22.454 mempool/bucket: not in enabled drivers build config 00:21:22.454 mempool/cnxk: not in enabled drivers build config 00:21:22.454 mempool/dpaa: not in enabled drivers build config 00:21:22.454 mempool/dpaa2: not in enabled drivers build config 00:21:22.454 mempool/octeontx: not in enabled drivers build config 00:21:22.454 mempool/stack: not in enabled drivers build config 00:21:22.454 dma/cnxk: not in enabled drivers build config 00:21:22.454 dma/dpaa: not in enabled drivers build config 00:21:22.454 dma/dpaa2: not in enabled drivers build config 00:21:22.454 dma/hisilicon: not in enabled drivers build config 00:21:22.454 dma/idxd: not in enabled drivers build config 00:21:22.454 dma/ioat: not in enabled drivers build config 00:21:22.454 dma/skeleton: not in enabled drivers build config 00:21:22.454 net/af_packet: not in enabled drivers build config 00:21:22.454 net/af_xdp: not in enabled drivers build config 00:21:22.454 net/ark: not in enabled drivers build config 00:21:22.454 net/atlantic: not in enabled drivers build config 00:21:22.454 net/avp: not in enabled drivers build config 00:21:22.454 net/axgbe: not in enabled drivers build config 00:21:22.454 net/bnx2x: not in enabled drivers build config 00:21:22.454 net/bnxt: not in enabled drivers build config 00:21:22.454 net/bonding: not in enabled drivers build config 00:21:22.454 net/cnxk: not in enabled drivers build config 00:21:22.454 net/cpfl: not in enabled drivers build config 00:21:22.454 net/cxgbe: not in enabled drivers build config 00:21:22.454 net/dpaa: not in enabled drivers build config 00:21:22.454 net/dpaa2: not in enabled drivers build config 00:21:22.454 net/e1000: not in enabled drivers build config 00:21:22.454 net/ena: not in enabled drivers build config 00:21:22.454 net/enetc: not in enabled drivers build config 00:21:22.454 net/enetfec: not in enabled drivers build config 
00:21:22.454 net/enic: not in enabled drivers build config 00:21:22.454 net/failsafe: not in enabled drivers build config 00:21:22.454 net/fm10k: not in enabled drivers build config 00:21:22.454 net/gve: not in enabled drivers build config 00:21:22.454 net/hinic: not in enabled drivers build config 00:21:22.454 net/hns3: not in enabled drivers build config 00:21:22.454 net/i40e: not in enabled drivers build config 00:21:22.454 net/iavf: not in enabled drivers build config 00:21:22.454 net/ice: not in enabled drivers build config 00:21:22.454 net/idpf: not in enabled drivers build config 00:21:22.454 net/igc: not in enabled drivers build config 00:21:22.454 net/ionic: not in enabled drivers build config 00:21:22.454 net/ipn3ke: not in enabled drivers build config 00:21:22.454 net/ixgbe: not in enabled drivers build config 00:21:22.454 net/mana: not in enabled drivers build config 00:21:22.454 net/memif: not in enabled drivers build config 00:21:22.454 net/mlx4: not in enabled drivers build config 00:21:22.454 net/mlx5: not in enabled drivers build config 00:21:22.454 net/mvneta: not in enabled drivers build config 00:21:22.454 net/mvpp2: not in enabled drivers build config 00:21:22.454 net/netvsc: not in enabled drivers build config 00:21:22.454 net/nfb: not in enabled drivers build config 00:21:22.454 net/nfp: not in enabled drivers build config 00:21:22.454 net/ngbe: not in enabled drivers build config 00:21:22.454 net/null: not in enabled drivers build config 00:21:22.454 net/octeontx: not in enabled drivers build config 00:21:22.454 net/octeon_ep: not in enabled drivers build config 00:21:22.454 net/pcap: not in enabled drivers build config 00:21:22.454 net/pfe: not in enabled drivers build config 00:21:22.454 net/qede: not in enabled drivers build config 00:21:22.454 net/ring: not in enabled drivers build config 00:21:22.454 net/sfc: not in enabled drivers build config 00:21:22.454 net/softnic: not in enabled drivers build config 00:21:22.454 net/tap: not in 
enabled drivers build config 00:21:22.454 net/thunderx: not in enabled drivers build config 00:21:22.454 net/txgbe: not in enabled drivers build config 00:21:22.454 net/vdev_netvsc: not in enabled drivers build config 00:21:22.454 net/vhost: not in enabled drivers build config 00:21:22.454 net/virtio: not in enabled drivers build config 00:21:22.454 net/vmxnet3: not in enabled drivers build config 00:21:22.454 raw/*: missing internal dependency, "rawdev" 00:21:22.454 crypto/armv8: not in enabled drivers build config 00:21:22.454 crypto/bcmfs: not in enabled drivers build config 00:21:22.454 crypto/caam_jr: not in enabled drivers build config 00:21:22.454 crypto/ccp: not in enabled drivers build config 00:21:22.454 crypto/cnxk: not in enabled drivers build config 00:21:22.454 crypto/dpaa_sec: not in enabled drivers build config 00:21:22.454 crypto/dpaa2_sec: not in enabled drivers build config 00:21:22.454 crypto/ipsec_mb: not in enabled drivers build config 00:21:22.454 crypto/mlx5: not in enabled drivers build config 00:21:22.454 crypto/mvsam: not in enabled drivers build config 00:21:22.454 crypto/nitrox: not in enabled drivers build config 00:21:22.454 crypto/null: not in enabled drivers build config 00:21:22.454 crypto/octeontx: not in enabled drivers build config 00:21:22.454 crypto/openssl: not in enabled drivers build config 00:21:22.454 crypto/scheduler: not in enabled drivers build config 00:21:22.454 crypto/uadk: not in enabled drivers build config 00:21:22.454 crypto/virtio: not in enabled drivers build config 00:21:22.455 compress/isal: not in enabled drivers build config 00:21:22.455 compress/mlx5: not in enabled drivers build config 00:21:22.455 compress/nitrox: not in enabled drivers build config 00:21:22.455 compress/octeontx: not in enabled drivers build config 00:21:22.455 compress/zlib: not in enabled drivers build config 00:21:22.455 regex/*: missing internal dependency, "regexdev" 00:21:22.455 ml/*: missing internal dependency, "mldev" 
00:21:22.455 vdpa/ifc: not in enabled drivers build config 00:21:22.455 vdpa/mlx5: not in enabled drivers build config 00:21:22.455 vdpa/nfp: not in enabled drivers build config 00:21:22.455 vdpa/sfc: not in enabled drivers build config 00:21:22.455 event/*: missing internal dependency, "eventdev" 00:21:22.455 baseband/*: missing internal dependency, "bbdev" 00:21:22.455 gpu/*: missing internal dependency, "gpudev" 00:21:22.455 00:21:22.455 00:21:22.455 Build targets in project: 85 00:21:22.455 00:21:22.455 DPDK 24.03.0 00:21:22.455 00:21:22.455 User defined options 00:21:22.455 buildtype : debug 00:21:22.455 default_library : shared 00:21:22.455 libdir : lib 00:21:22.455 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:21:22.455 b_sanitize : address 00:21:22.455 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:21:22.455 c_link_args : 00:21:22.455 cpu_instruction_set: native 00:21:22.455 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:21:22.455 disable_libs : acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:21:22.455 enable_docs : false 00:21:22.455 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm 00:21:22.455 enable_kmods : false 00:21:22.455 max_lcores : 128 00:21:22.455 tests : false 00:21:22.455 00:21:22.455 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:21:22.455 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:21:22.455 [1/268] Compiling C object 
lib/librte_log.a.p/log_log_linux.c.o 00:21:22.455 [2/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:21:22.455 [3/268] Linking static target lib/librte_kvargs.a 00:21:22.455 [4/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:21:22.455 [5/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:21:22.455 [6/268] Linking static target lib/librte_log.a 00:21:22.455 [7/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:21:22.455 [8/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:21:22.455 [9/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:21:22.455 [10/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:21:22.455 [11/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:21:22.455 [12/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:21:22.455 [13/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:21:22.455 [14/268] Linking static target lib/librte_telemetry.a 00:21:22.455 [15/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:21:22.455 [16/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:21:22.455 [17/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:21:22.455 [18/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:21:22.455 [19/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:21:22.455 [20/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:21:22.455 [21/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:21:22.718 [22/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:21:22.718 [23/268] Linking target lib/librte_log.so.24.1 00:21:22.718 [24/268] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:21:22.718 [25/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:21:22.718 [26/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:21:22.718 [27/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:21:22.718 [28/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:21:22.718 [29/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:21:22.977 [30/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:21:22.977 [31/268] Linking target lib/librte_kvargs.so.24.1 00:21:22.977 [32/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:21:22.977 [33/268] Linking target lib/librte_telemetry.so.24.1 00:21:22.977 [34/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:21:23.237 [35/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:21:23.237 [36/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:21:23.237 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:21:23.238 [38/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:21:23.238 [39/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:21:23.238 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:21:23.238 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:21:23.238 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:21:23.238 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:21:23.580 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:21:23.580 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 
00:21:23.580 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:21:23.580 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:21:23.866 [48/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:21:23.866 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:21:23.866 [50/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:21:23.866 [51/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:21:23.866 [52/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:21:23.866 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:21:24.125 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:21:24.125 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:21:24.125 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:21:24.125 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:21:24.125 [58/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:21:24.384 [59/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:21:24.384 [60/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:21:24.384 [61/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:21:24.384 [62/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:21:24.384 [63/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:21:24.384 [64/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:21:24.384 [65/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:21:24.643 [66/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:21:24.643 [67/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:21:24.643 [68/268] Compiling C object 
lib/librte_eal.a.p/eal_linux_eal.c.o 00:21:24.643 [69/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:21:24.643 [70/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:21:24.902 [71/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:21:24.902 [72/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:21:24.902 [73/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:21:24.902 [74/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:21:24.902 [75/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:21:24.902 [76/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:21:24.902 [77/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:21:24.902 [78/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:21:25.160 [79/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:21:25.160 [80/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:21:25.160 [81/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:21:25.160 [82/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:21:25.161 [83/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:21:25.161 [84/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:21:25.420 [85/268] Linking static target lib/librte_eal.a 00:21:25.420 [86/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:21:25.420 [87/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:21:25.420 [88/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:21:25.420 [89/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:21:25.420 [90/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:21:25.420 [91/268] Linking static target lib/librte_ring.a 
00:21:25.679 [92/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:21:25.679 [93/268] Linking static target lib/librte_mempool.a 00:21:25.679 [94/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:21:25.679 [95/268] Linking static target lib/librte_rcu.a 00:21:25.679 [96/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:21:25.938 [97/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:21:25.938 [98/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:21:25.938 [99/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:21:25.938 [100/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:21:25.938 [101/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:21:26.197 [102/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:21:26.197 [103/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:21:26.197 [104/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:21:26.197 [105/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:21:26.197 [106/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:21:26.197 [107/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:21:26.197 [108/268] Linking static target lib/librte_net.a 00:21:26.197 [109/268] Linking static target lib/librte_mbuf.a 00:21:26.457 [110/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:21:26.457 [111/268] Linking static target lib/librte_meter.a 00:21:26.457 [112/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:21:26.716 [113/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:21:26.716 [114/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:21:26.716 [115/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:21:26.716 [116/268] Generating 
lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:21:26.716 [117/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:21:26.975 [118/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:21:26.975 [119/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:21:27.235 [120/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:21:27.235 [121/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:21:27.495 [122/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:21:27.495 [123/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:21:27.495 [124/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:21:27.495 [125/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:21:27.754 [126/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:21:27.754 [127/268] Linking static target lib/librte_pci.a 00:21:27.754 [128/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:21:27.754 [129/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:21:27.754 [130/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:21:27.754 [131/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:21:28.014 [132/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:21:28.014 [133/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:21:28.014 [134/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:21:28.014 [135/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:21:28.014 [136/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:21:28.014 [137/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 
00:21:28.014 [138/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:21:28.014 [139/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:21:28.014 [140/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:21:28.014 [141/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:21:28.274 [142/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:21:28.274 [143/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:21:28.274 [144/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:21:28.274 [145/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:21:28.274 [146/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:21:28.274 [147/268] Linking static target lib/librte_cmdline.a 00:21:28.274 [148/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:21:28.533 [149/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:21:28.533 [150/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:21:28.533 [151/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:21:28.793 [152/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:21:28.793 [153/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:21:28.793 [154/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:21:28.793 [155/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:21:28.793 [156/268] Linking static target lib/librte_timer.a 00:21:28.793 [157/268] Linking static target lib/librte_ethdev.a 00:21:29.052 [158/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:21:29.052 [159/268] Linking static target lib/librte_compressdev.a 00:21:29.052 [160/268] Compiling C object 
lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:21:29.052 [161/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:21:29.052 [162/268] Linking static target lib/librte_hash.a 00:21:29.052 [163/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:21:29.311 [164/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:21:29.311 [165/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:21:29.569 [166/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:21:29.569 [167/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:21:29.569 [168/268] Linking static target lib/librte_dmadev.a 00:21:29.569 [169/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:21:29.569 [170/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:21:29.569 [171/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:21:29.828 [172/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:21:29.828 [173/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:21:30.087 [174/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:21:30.087 [175/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:21:30.087 [176/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:21:30.087 [177/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:21:30.087 [178/268] Linking static target lib/librte_cryptodev.a 00:21:30.087 [179/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:21:30.346 [180/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:21:30.346 [181/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:21:30.346 
[182/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:21:30.346 [183/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:21:30.346 [184/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:21:30.346 [185/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:21:30.604 [186/268] Linking static target lib/librte_power.a 00:21:30.863 [187/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:21:30.863 [188/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:21:30.863 [189/268] Linking static target lib/librte_reorder.a 00:21:30.863 [190/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:21:30.863 [191/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:21:30.863 [192/268] Linking static target lib/librte_security.a 00:21:30.863 [193/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:21:31.461 [194/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:21:31.461 [195/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:21:31.461 [196/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:21:31.720 [197/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:21:31.720 [198/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:21:31.720 [199/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:21:31.720 [200/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:21:31.979 [201/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:21:31.979 [202/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:21:32.238 [203/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:21:32.238 [204/268] Compiling C object 
drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:21:32.238 [205/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:21:32.497 [206/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:21:32.497 [207/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:21:32.497 [208/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:21:32.497 [209/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:21:32.497 [210/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:21:32.497 [211/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:21:32.756 [212/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:21:32.756 [213/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:21:32.756 [214/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:21:32.756 [215/268] Linking static target drivers/librte_bus_vdev.a 00:21:32.756 [216/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:21:32.756 [217/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:21:32.756 [218/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:21:32.756 [219/268] Linking static target drivers/librte_bus_pci.a 00:21:32.756 [220/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:21:32.756 [221/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:21:33.015 [222/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:21:33.015 [223/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:21:33.015 [224/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:21:33.015 
[225/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:21:33.015 [226/268] Linking static target drivers/librte_mempool_ring.a 00:21:33.274 [227/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:21:33.843 [228/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:21:38.030 [229/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:21:38.030 [230/268] Linking static target lib/librte_vhost.a 00:21:38.030 [231/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:21:38.030 [232/268] Linking target lib/librte_eal.so.24.1 00:21:38.030 [233/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:21:38.030 [234/268] Linking target lib/librte_dmadev.so.24.1 00:21:38.030 [235/268] Linking target drivers/librte_bus_vdev.so.24.1 00:21:38.030 [236/268] Linking target lib/librte_meter.so.24.1 00:21:38.030 [237/268] Linking target lib/librte_pci.so.24.1 00:21:38.030 [238/268] Linking target lib/librte_ring.so.24.1 00:21:38.030 [239/268] Linking target lib/librte_timer.so.24.1 00:21:38.030 [240/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:21:38.030 [241/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:21:38.030 [242/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:21:38.030 [243/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:21:38.030 [244/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:21:38.030 [245/268] Linking target lib/librte_rcu.so.24.1 00:21:38.030 [246/268] Linking target lib/librte_mempool.so.24.1 00:21:38.030 [247/268] Linking target drivers/librte_bus_pci.so.24.1 00:21:38.290 [248/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:21:38.290 
[249/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:21:38.290 [250/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:21:38.290 [251/268] Linking target drivers/librte_mempool_ring.so.24.1 00:21:38.290 [252/268] Linking target lib/librte_mbuf.so.24.1 00:21:38.290 [253/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:21:38.548 [254/268] Linking target lib/librte_compressdev.so.24.1 00:21:38.548 [255/268] Linking target lib/librte_net.so.24.1 00:21:38.548 [256/268] Linking target lib/librte_reorder.so.24.1 00:21:38.548 [257/268] Linking target lib/librte_cryptodev.so.24.1 00:21:38.548 [258/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:21:38.548 [259/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:21:38.548 [260/268] Linking target lib/librte_security.so.24.1 00:21:38.548 [261/268] Linking target lib/librte_hash.so.24.1 00:21:38.548 [262/268] Linking target lib/librte_cmdline.so.24.1 00:21:38.548 [263/268] Linking target lib/librte_ethdev.so.24.1 00:21:38.807 [264/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:21:38.807 [265/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:21:38.807 [266/268] Linking target lib/librte_power.so.24.1 00:21:39.374 [267/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:21:39.374 [268/268] Linking target lib/librte_vhost.so.24.1 00:21:39.633 INFO: autodetecting backend as ninja 00:21:39.633 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:21:54.509 CC lib/ut_mock/mock.o 00:21:54.509 CC lib/log/log_flags.o 00:21:54.509 CC lib/log/log.o 00:21:54.509 CC lib/log/log_deprecated.o 00:21:54.509 CC lib/ut/ut.o 00:21:54.509 LIB libspdk_log.a 
00:21:54.509 LIB libspdk_ut_mock.a 00:21:54.509 LIB libspdk_ut.a 00:21:54.509 SO libspdk_log.so.7.1 00:21:54.509 SO libspdk_ut_mock.so.6.0 00:21:54.509 SO libspdk_ut.so.2.0 00:21:54.768 SYMLINK libspdk_ut_mock.so 00:21:54.768 SYMLINK libspdk_log.so 00:21:54.768 SYMLINK libspdk_ut.so 00:21:55.059 CC lib/dma/dma.o 00:21:55.059 CC lib/ioat/ioat.o 00:21:55.059 CXX lib/trace_parser/trace.o 00:21:55.059 CC lib/util/base64.o 00:21:55.059 CC lib/util/bit_array.o 00:21:55.059 CC lib/util/cpuset.o 00:21:55.059 CC lib/util/crc32.o 00:21:55.059 CC lib/util/crc32c.o 00:21:55.059 CC lib/util/crc16.o 00:21:55.059 CC lib/vfio_user/host/vfio_user_pci.o 00:21:55.059 CC lib/vfio_user/host/vfio_user.o 00:21:55.059 CC lib/util/crc32_ieee.o 00:21:55.059 LIB libspdk_dma.a 00:21:55.059 CC lib/util/crc64.o 00:21:55.059 CC lib/util/dif.o 00:21:55.059 SO libspdk_dma.so.5.0 00:21:55.317 CC lib/util/fd.o 00:21:55.318 LIB libspdk_ioat.a 00:21:55.318 SYMLINK libspdk_dma.so 00:21:55.318 CC lib/util/fd_group.o 00:21:55.318 CC lib/util/file.o 00:21:55.318 SO libspdk_ioat.so.7.0 00:21:55.318 CC lib/util/hexlify.o 00:21:55.318 CC lib/util/iov.o 00:21:55.318 SYMLINK libspdk_ioat.so 00:21:55.318 CC lib/util/math.o 00:21:55.318 CC lib/util/net.o 00:21:55.318 LIB libspdk_vfio_user.a 00:21:55.318 SO libspdk_vfio_user.so.5.0 00:21:55.318 CC lib/util/pipe.o 00:21:55.318 CC lib/util/strerror_tls.o 00:21:55.318 CC lib/util/string.o 00:21:55.318 SYMLINK libspdk_vfio_user.so 00:21:55.318 CC lib/util/uuid.o 00:21:55.318 CC lib/util/xor.o 00:21:55.318 CC lib/util/zipf.o 00:21:55.318 CC lib/util/md5.o 00:21:55.884 LIB libspdk_util.a 00:21:55.884 LIB libspdk_trace_parser.a 00:21:55.884 SO libspdk_util.so.10.1 00:21:55.884 SO libspdk_trace_parser.so.6.0 00:21:56.144 SYMLINK libspdk_util.so 00:21:56.144 SYMLINK libspdk_trace_parser.so 00:21:56.144 CC lib/vmd/vmd.o 00:21:56.144 CC lib/vmd/led.o 00:21:56.403 CC lib/rdma_utils/rdma_utils.o 00:21:56.403 CC lib/idxd/idxd.o 00:21:56.403 CC lib/json/json_parse.o 
00:21:56.403 CC lib/conf/conf.o 00:21:56.403 CC lib/idxd/idxd_kernel.o 00:21:56.403 CC lib/idxd/idxd_user.o 00:21:56.403 CC lib/json/json_util.o 00:21:56.403 CC lib/env_dpdk/env.o 00:21:56.403 CC lib/env_dpdk/memory.o 00:21:56.403 CC lib/env_dpdk/pci.o 00:21:56.403 LIB libspdk_conf.a 00:21:56.403 CC lib/json/json_write.o 00:21:56.403 SO libspdk_conf.so.6.0 00:21:56.662 CC lib/env_dpdk/init.o 00:21:56.662 CC lib/env_dpdk/threads.o 00:21:56.662 SYMLINK libspdk_conf.so 00:21:56.662 CC lib/env_dpdk/pci_ioat.o 00:21:56.662 LIB libspdk_rdma_utils.a 00:21:56.662 SO libspdk_rdma_utils.so.1.0 00:21:56.662 CC lib/env_dpdk/pci_virtio.o 00:21:56.662 SYMLINK libspdk_rdma_utils.so 00:21:56.662 CC lib/env_dpdk/pci_vmd.o 00:21:56.662 CC lib/env_dpdk/pci_idxd.o 00:21:56.921 LIB libspdk_json.a 00:21:56.921 CC lib/env_dpdk/pci_event.o 00:21:56.921 CC lib/env_dpdk/sigbus_handler.o 00:21:56.921 CC lib/env_dpdk/pci_dpdk.o 00:21:56.921 SO libspdk_json.so.6.0 00:21:56.921 CC lib/env_dpdk/pci_dpdk_2207.o 00:21:56.921 SYMLINK libspdk_json.so 00:21:56.921 CC lib/env_dpdk/pci_dpdk_2211.o 00:21:56.921 LIB libspdk_idxd.a 00:21:56.921 LIB libspdk_vmd.a 00:21:56.921 SO libspdk_idxd.so.12.1 00:21:56.921 SO libspdk_vmd.so.6.0 00:21:56.921 SYMLINK libspdk_idxd.so 00:21:56.921 CC lib/rdma_provider/common.o 00:21:56.921 CC lib/rdma_provider/rdma_provider_verbs.o 00:21:57.180 SYMLINK libspdk_vmd.so 00:21:57.181 CC lib/jsonrpc/jsonrpc_server.o 00:21:57.181 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:21:57.181 CC lib/jsonrpc/jsonrpc_client.o 00:21:57.181 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:21:57.181 LIB libspdk_rdma_provider.a 00:21:57.441 SO libspdk_rdma_provider.so.7.0 00:21:57.441 SYMLINK libspdk_rdma_provider.so 00:21:57.441 LIB libspdk_jsonrpc.a 00:21:57.441 SO libspdk_jsonrpc.so.6.0 00:21:57.700 SYMLINK libspdk_jsonrpc.so 00:21:57.960 LIB libspdk_env_dpdk.a 00:21:57.960 SO libspdk_env_dpdk.so.15.1 00:21:57.960 CC lib/rpc/rpc.o 00:21:58.220 SYMLINK libspdk_env_dpdk.so 00:21:58.220 LIB libspdk_rpc.a 
00:21:58.220 SO libspdk_rpc.so.6.0 00:21:58.220 SYMLINK libspdk_rpc.so 00:21:58.789 CC lib/notify/notify.o 00:21:58.789 CC lib/notify/notify_rpc.o 00:21:58.790 CC lib/trace/trace_flags.o 00:21:58.790 CC lib/trace/trace_rpc.o 00:21:58.790 CC lib/trace/trace.o 00:21:58.790 CC lib/keyring/keyring.o 00:21:58.790 CC lib/keyring/keyring_rpc.o 00:21:58.790 LIB libspdk_notify.a 00:21:59.050 SO libspdk_notify.so.6.0 00:21:59.050 LIB libspdk_keyring.a 00:21:59.050 LIB libspdk_trace.a 00:21:59.050 SYMLINK libspdk_notify.so 00:21:59.050 SO libspdk_keyring.so.2.0 00:21:59.050 SO libspdk_trace.so.11.0 00:21:59.050 SYMLINK libspdk_keyring.so 00:21:59.050 SYMLINK libspdk_trace.so 00:21:59.620 CC lib/thread/thread.o 00:21:59.620 CC lib/thread/iobuf.o 00:21:59.620 CC lib/sock/sock.o 00:21:59.620 CC lib/sock/sock_rpc.o 00:21:59.880 LIB libspdk_sock.a 00:22:00.140 SO libspdk_sock.so.10.0 00:22:00.140 SYMLINK libspdk_sock.so 00:22:00.399 CC lib/nvme/nvme_ctrlr_cmd.o 00:22:00.399 CC lib/nvme/nvme_ctrlr.o 00:22:00.399 CC lib/nvme/nvme_ns_cmd.o 00:22:00.399 CC lib/nvme/nvme_fabric.o 00:22:00.399 CC lib/nvme/nvme_ns.o 00:22:00.399 CC lib/nvme/nvme_pcie.o 00:22:00.399 CC lib/nvme/nvme_pcie_common.o 00:22:00.399 CC lib/nvme/nvme_qpair.o 00:22:00.399 CC lib/nvme/nvme.o 00:22:01.338 LIB libspdk_thread.a 00:22:01.338 SO libspdk_thread.so.11.0 00:22:01.338 CC lib/nvme/nvme_quirks.o 00:22:01.338 SYMLINK libspdk_thread.so 00:22:01.338 CC lib/nvme/nvme_transport.o 00:22:01.338 CC lib/nvme/nvme_discovery.o 00:22:01.338 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:22:01.338 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:22:01.338 CC lib/nvme/nvme_tcp.o 00:22:01.338 CC lib/nvme/nvme_opal.o 00:22:01.339 CC lib/nvme/nvme_io_msg.o 00:22:01.598 CC lib/nvme/nvme_poll_group.o 00:22:01.598 CC lib/nvme/nvme_zns.o 00:22:01.859 CC lib/nvme/nvme_stubs.o 00:22:01.859 CC lib/nvme/nvme_auth.o 00:22:01.859 CC lib/nvme/nvme_cuse.o 00:22:01.859 CC lib/nvme/nvme_rdma.o 00:22:02.119 CC lib/accel/accel.o 00:22:02.119 CC 
lib/blob/blobstore.o 00:22:02.119 CC lib/blob/request.o 00:22:02.119 CC lib/blob/zeroes.o 00:22:02.379 CC lib/init/json_config.o 00:22:02.379 CC lib/init/subsystem.o 00:22:02.379 CC lib/blob/blob_bs_dev.o 00:22:02.639 CC lib/accel/accel_rpc.o 00:22:02.639 CC lib/init/subsystem_rpc.o 00:22:02.639 CC lib/accel/accel_sw.o 00:22:02.639 CC lib/init/rpc.o 00:22:02.898 CC lib/virtio/virtio.o 00:22:02.898 CC lib/virtio/virtio_vhost_user.o 00:22:02.898 CC lib/virtio/virtio_vfio_user.o 00:22:02.898 LIB libspdk_init.a 00:22:02.898 CC lib/virtio/virtio_pci.o 00:22:02.898 CC lib/fsdev/fsdev.o 00:22:02.898 SO libspdk_init.so.6.0 00:22:03.158 SYMLINK libspdk_init.so 00:22:03.158 CC lib/fsdev/fsdev_io.o 00:22:03.158 CC lib/fsdev/fsdev_rpc.o 00:22:03.158 LIB libspdk_virtio.a 00:22:03.158 SO libspdk_virtio.so.7.0 00:22:03.158 LIB libspdk_accel.a 00:22:03.418 SYMLINK libspdk_virtio.so 00:22:03.418 SO libspdk_accel.so.16.0 00:22:03.418 CC lib/event/app.o 00:22:03.418 CC lib/event/app_rpc.o 00:22:03.418 CC lib/event/log_rpc.o 00:22:03.418 CC lib/event/reactor.o 00:22:03.418 CC lib/event/scheduler_static.o 00:22:03.418 LIB libspdk_nvme.a 00:22:03.418 SYMLINK libspdk_accel.so 00:22:03.678 SO libspdk_nvme.so.15.0 00:22:03.679 LIB libspdk_fsdev.a 00:22:03.679 SO libspdk_fsdev.so.2.0 00:22:03.679 CC lib/bdev/bdev_rpc.o 00:22:03.679 CC lib/bdev/bdev.o 00:22:03.679 CC lib/bdev/bdev_zone.o 00:22:03.679 CC lib/bdev/part.o 00:22:03.679 CC lib/bdev/scsi_nvme.o 00:22:03.679 SYMLINK libspdk_fsdev.so 00:22:03.939 SYMLINK libspdk_nvme.so 00:22:03.939 LIB libspdk_event.a 00:22:03.939 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:22:03.939 SO libspdk_event.so.14.0 00:22:03.939 SYMLINK libspdk_event.so 00:22:04.508 LIB libspdk_fuse_dispatcher.a 00:22:04.508 SO libspdk_fuse_dispatcher.so.1.0 00:22:04.768 SYMLINK libspdk_fuse_dispatcher.so 00:22:05.706 LIB libspdk_blob.a 00:22:05.706 SO libspdk_blob.so.12.0 00:22:05.706 SYMLINK libspdk_blob.so 00:22:06.275 CC lib/lvol/lvol.o 00:22:06.275 CC 
lib/blobfs/blobfs.o 00:22:06.275 CC lib/blobfs/tree.o 00:22:06.534 LIB libspdk_bdev.a 00:22:06.794 SO libspdk_bdev.so.17.0 00:22:06.794 SYMLINK libspdk_bdev.so 00:22:07.053 LIB libspdk_blobfs.a 00:22:07.053 CC lib/nbd/nbd_rpc.o 00:22:07.053 CC lib/nbd/nbd.o 00:22:07.053 CC lib/scsi/dev.o 00:22:07.053 CC lib/scsi/lun.o 00:22:07.053 CC lib/ublk/ublk.o 00:22:07.053 CC lib/scsi/port.o 00:22:07.053 CC lib/nvmf/ctrlr.o 00:22:07.053 CC lib/ftl/ftl_core.o 00:22:07.053 SO libspdk_blobfs.so.11.0 00:22:07.312 SYMLINK libspdk_blobfs.so 00:22:07.312 CC lib/ftl/ftl_init.o 00:22:07.312 LIB libspdk_lvol.a 00:22:07.312 CC lib/ftl/ftl_layout.o 00:22:07.312 SO libspdk_lvol.so.11.0 00:22:07.312 CC lib/ublk/ublk_rpc.o 00:22:07.312 CC lib/nvmf/ctrlr_discovery.o 00:22:07.312 SYMLINK libspdk_lvol.so 00:22:07.312 CC lib/nvmf/ctrlr_bdev.o 00:22:07.312 CC lib/scsi/scsi.o 00:22:07.312 CC lib/scsi/scsi_bdev.o 00:22:07.571 CC lib/scsi/scsi_pr.o 00:22:07.571 CC lib/scsi/scsi_rpc.o 00:22:07.571 CC lib/nvmf/subsystem.o 00:22:07.571 LIB libspdk_nbd.a 00:22:07.571 SO libspdk_nbd.so.7.0 00:22:07.571 CC lib/ftl/ftl_debug.o 00:22:07.571 SYMLINK libspdk_nbd.so 00:22:07.571 CC lib/ftl/ftl_io.o 00:22:07.571 CC lib/ftl/ftl_sb.o 00:22:07.831 LIB libspdk_ublk.a 00:22:07.831 SO libspdk_ublk.so.3.0 00:22:07.831 CC lib/scsi/task.o 00:22:07.831 CC lib/ftl/ftl_l2p.o 00:22:07.831 CC lib/nvmf/nvmf.o 00:22:07.831 SYMLINK libspdk_ublk.so 00:22:07.831 CC lib/nvmf/nvmf_rpc.o 00:22:07.831 CC lib/ftl/ftl_l2p_flat.o 00:22:07.831 CC lib/ftl/ftl_nv_cache.o 00:22:07.831 CC lib/ftl/ftl_band.o 00:22:08.091 LIB libspdk_scsi.a 00:22:08.091 CC lib/nvmf/transport.o 00:22:08.091 SO libspdk_scsi.so.9.0 00:22:08.091 CC lib/ftl/ftl_band_ops.o 00:22:08.091 CC lib/nvmf/tcp.o 00:22:08.091 SYMLINK libspdk_scsi.so 00:22:08.091 CC lib/ftl/ftl_writer.o 00:22:08.349 CC lib/nvmf/stubs.o 00:22:08.608 CC lib/iscsi/conn.o 00:22:08.608 CC lib/vhost/vhost.o 00:22:08.608 CC lib/vhost/vhost_rpc.o 00:22:08.874 CC lib/vhost/vhost_scsi.o 00:22:08.874 CC 
lib/vhost/vhost_blk.o 00:22:08.874 CC lib/vhost/rte_vhost_user.o 00:22:08.874 CC lib/ftl/ftl_rq.o 00:22:08.874 CC lib/nvmf/mdns_server.o 00:22:08.874 CC lib/nvmf/rdma.o 00:22:09.161 CC lib/ftl/ftl_reloc.o 00:22:09.161 CC lib/iscsi/init_grp.o 00:22:09.420 CC lib/ftl/ftl_l2p_cache.o 00:22:09.420 CC lib/nvmf/auth.o 00:22:09.420 CC lib/iscsi/iscsi.o 00:22:09.420 CC lib/iscsi/param.o 00:22:09.420 CC lib/ftl/ftl_p2l.o 00:22:09.679 CC lib/ftl/ftl_p2l_log.o 00:22:09.679 CC lib/iscsi/portal_grp.o 00:22:09.679 CC lib/iscsi/tgt_node.o 00:22:09.679 CC lib/ftl/mngt/ftl_mngt.o 00:22:09.938 LIB libspdk_vhost.a 00:22:09.938 CC lib/iscsi/iscsi_subsystem.o 00:22:09.938 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:22:09.938 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:22:09.938 SO libspdk_vhost.so.8.0 00:22:09.938 CC lib/ftl/mngt/ftl_mngt_startup.o 00:22:09.938 SYMLINK libspdk_vhost.so 00:22:09.938 CC lib/ftl/mngt/ftl_mngt_md.o 00:22:10.197 CC lib/iscsi/iscsi_rpc.o 00:22:10.197 CC lib/ftl/mngt/ftl_mngt_misc.o 00:22:10.197 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:22:10.197 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:22:10.197 CC lib/ftl/mngt/ftl_mngt_band.o 00:22:10.197 CC lib/iscsi/task.o 00:22:10.197 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:22:10.455 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:22:10.455 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:22:10.455 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:22:10.455 CC lib/ftl/utils/ftl_conf.o 00:22:10.456 CC lib/ftl/utils/ftl_md.o 00:22:10.456 CC lib/ftl/utils/ftl_mempool.o 00:22:10.456 CC lib/ftl/utils/ftl_bitmap.o 00:22:10.456 CC lib/ftl/utils/ftl_property.o 00:22:10.456 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:22:10.456 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:22:10.456 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:22:10.714 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:22:10.714 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:22:10.714 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:22:10.714 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:22:10.714 CC lib/ftl/upgrade/ftl_sb_v3.o 00:22:10.714 CC 
lib/ftl/upgrade/ftl_sb_v5.o 00:22:10.714 CC lib/ftl/nvc/ftl_nvc_dev.o 00:22:10.714 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:22:10.973 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:22:10.973 LIB libspdk_iscsi.a 00:22:10.973 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:22:10.973 CC lib/ftl/base/ftl_base_dev.o 00:22:10.973 CC lib/ftl/base/ftl_base_bdev.o 00:22:10.973 CC lib/ftl/ftl_trace.o 00:22:10.973 SO libspdk_iscsi.so.8.0 00:22:11.231 SYMLINK libspdk_iscsi.so 00:22:11.231 LIB libspdk_ftl.a 00:22:11.231 LIB libspdk_nvmf.a 00:22:11.490 SO libspdk_nvmf.so.20.0 00:22:11.490 SO libspdk_ftl.so.9.0 00:22:11.749 SYMLINK libspdk_nvmf.so 00:22:11.749 SYMLINK libspdk_ftl.so 00:22:12.317 CC module/env_dpdk/env_dpdk_rpc.o 00:22:12.317 CC module/keyring/file/keyring.o 00:22:12.317 CC module/accel/error/accel_error.o 00:22:12.317 CC module/scheduler/dynamic/scheduler_dynamic.o 00:22:12.317 CC module/keyring/linux/keyring.o 00:22:12.317 CC module/fsdev/aio/fsdev_aio.o 00:22:12.317 CC module/accel/ioat/accel_ioat.o 00:22:12.317 CC module/accel/dsa/accel_dsa.o 00:22:12.317 CC module/sock/posix/posix.o 00:22:12.317 CC module/blob/bdev/blob_bdev.o 00:22:12.317 LIB libspdk_env_dpdk_rpc.a 00:22:12.317 SO libspdk_env_dpdk_rpc.so.6.0 00:22:12.317 SYMLINK libspdk_env_dpdk_rpc.so 00:22:12.317 CC module/keyring/file/keyring_rpc.o 00:22:12.317 CC module/accel/ioat/accel_ioat_rpc.o 00:22:12.317 CC module/keyring/linux/keyring_rpc.o 00:22:12.576 CC module/accel/error/accel_error_rpc.o 00:22:12.576 LIB libspdk_scheduler_dynamic.a 00:22:12.576 SO libspdk_scheduler_dynamic.so.4.0 00:22:12.576 LIB libspdk_keyring_linux.a 00:22:12.576 LIB libspdk_keyring_file.a 00:22:12.576 LIB libspdk_accel_ioat.a 00:22:12.576 CC module/accel/dsa/accel_dsa_rpc.o 00:22:12.576 SYMLINK libspdk_scheduler_dynamic.so 00:22:12.576 SO libspdk_keyring_file.so.2.0 00:22:12.576 SO libspdk_keyring_linux.so.1.0 00:22:12.577 SO libspdk_accel_ioat.so.6.0 00:22:12.577 LIB libspdk_blob_bdev.a 00:22:12.577 LIB libspdk_accel_error.a 00:22:12.577 
SO libspdk_blob_bdev.so.12.0 00:22:12.577 CC module/fsdev/aio/fsdev_aio_rpc.o 00:22:12.577 SYMLINK libspdk_keyring_file.so 00:22:12.577 SYMLINK libspdk_accel_ioat.so 00:22:12.577 SYMLINK libspdk_keyring_linux.so 00:22:12.577 SO libspdk_accel_error.so.2.0 00:22:12.577 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:22:12.577 CC module/fsdev/aio/linux_aio_mgr.o 00:22:12.577 SYMLINK libspdk_blob_bdev.so 00:22:12.577 SYMLINK libspdk_accel_error.so 00:22:12.577 LIB libspdk_accel_dsa.a 00:22:12.836 SO libspdk_accel_dsa.so.5.0 00:22:12.836 CC module/accel/iaa/accel_iaa.o 00:22:12.836 SYMLINK libspdk_accel_dsa.so 00:22:12.836 CC module/accel/iaa/accel_iaa_rpc.o 00:22:12.836 LIB libspdk_scheduler_dpdk_governor.a 00:22:12.836 CC module/scheduler/gscheduler/gscheduler.o 00:22:12.836 SO libspdk_scheduler_dpdk_governor.so.4.0 00:22:12.836 SYMLINK libspdk_scheduler_dpdk_governor.so 00:22:12.836 CC module/bdev/delay/vbdev_delay.o 00:22:12.836 CC module/bdev/delay/vbdev_delay_rpc.o 00:22:12.836 CC module/bdev/error/vbdev_error.o 00:22:12.836 CC module/blobfs/bdev/blobfs_bdev.o 00:22:12.836 LIB libspdk_scheduler_gscheduler.a 00:22:12.836 LIB libspdk_accel_iaa.a 00:22:13.095 SO libspdk_scheduler_gscheduler.so.4.0 00:22:13.095 LIB libspdk_fsdev_aio.a 00:22:13.095 SO libspdk_accel_iaa.so.3.0 00:22:13.095 CC module/bdev/gpt/gpt.o 00:22:13.095 SO libspdk_fsdev_aio.so.1.0 00:22:13.095 CC module/bdev/lvol/vbdev_lvol.o 00:22:13.095 LIB libspdk_sock_posix.a 00:22:13.095 SYMLINK libspdk_scheduler_gscheduler.so 00:22:13.095 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:22:13.095 SYMLINK libspdk_accel_iaa.so 00:22:13.095 SO libspdk_sock_posix.so.6.0 00:22:13.095 CC module/bdev/gpt/vbdev_gpt.o 00:22:13.095 SYMLINK libspdk_fsdev_aio.so 00:22:13.095 CC module/bdev/error/vbdev_error_rpc.o 00:22:13.095 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:22:13.095 SYMLINK libspdk_sock_posix.so 00:22:13.353 LIB libspdk_bdev_delay.a 00:22:13.353 LIB libspdk_bdev_error.a 00:22:13.353 LIB libspdk_blobfs_bdev.a 
00:22:13.353 SO libspdk_bdev_delay.so.6.0 00:22:13.353 SO libspdk_bdev_error.so.6.0 00:22:13.353 CC module/bdev/malloc/bdev_malloc.o 00:22:13.353 SO libspdk_blobfs_bdev.so.6.0 00:22:13.353 CC module/bdev/null/bdev_null.o 00:22:13.353 LIB libspdk_bdev_gpt.a 00:22:13.353 CC module/bdev/nvme/bdev_nvme.o 00:22:13.353 CC module/bdev/passthru/vbdev_passthru.o 00:22:13.353 SYMLINK libspdk_bdev_error.so 00:22:13.353 SO libspdk_bdev_gpt.so.6.0 00:22:13.353 SYMLINK libspdk_bdev_delay.so 00:22:13.353 CC module/bdev/nvme/bdev_nvme_rpc.o 00:22:13.353 CC module/bdev/nvme/nvme_rpc.o 00:22:13.353 SYMLINK libspdk_blobfs_bdev.so 00:22:13.353 CC module/bdev/nvme/bdev_mdns_client.o 00:22:13.353 CC module/bdev/nvme/vbdev_opal.o 00:22:13.353 SYMLINK libspdk_bdev_gpt.so 00:22:13.353 CC module/bdev/nvme/vbdev_opal_rpc.o 00:22:13.611 LIB libspdk_bdev_lvol.a 00:22:13.611 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:22:13.611 SO libspdk_bdev_lvol.so.6.0 00:22:13.611 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:22:13.611 CC module/bdev/null/bdev_null_rpc.o 00:22:13.611 SYMLINK libspdk_bdev_lvol.so 00:22:13.611 CC module/bdev/malloc/bdev_malloc_rpc.o 00:22:13.611 LIB libspdk_bdev_passthru.a 00:22:13.870 SO libspdk_bdev_passthru.so.6.0 00:22:13.870 LIB libspdk_bdev_null.a 00:22:13.870 SYMLINK libspdk_bdev_passthru.so 00:22:13.870 CC module/bdev/split/vbdev_split.o 00:22:13.870 CC module/bdev/raid/bdev_raid.o 00:22:13.870 SO libspdk_bdev_null.so.6.0 00:22:13.870 LIB libspdk_bdev_malloc.a 00:22:13.870 CC module/bdev/zone_block/vbdev_zone_block.o 00:22:13.870 CC module/bdev/aio/bdev_aio.o 00:22:13.870 SO libspdk_bdev_malloc.so.6.0 00:22:13.870 SYMLINK libspdk_bdev_null.so 00:22:13.870 SYMLINK libspdk_bdev_malloc.so 00:22:13.870 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:22:13.870 CC module/bdev/iscsi/bdev_iscsi.o 00:22:13.870 CC module/bdev/ftl/bdev_ftl.o 00:22:14.129 CC module/bdev/aio/bdev_aio_rpc.o 00:22:14.129 CC module/bdev/split/vbdev_split_rpc.o 00:22:14.129 CC 
module/bdev/virtio/bdev_virtio_scsi.o 00:22:14.129 CC module/bdev/virtio/bdev_virtio_blk.o 00:22:14.129 LIB libspdk_bdev_split.a 00:22:14.129 LIB libspdk_bdev_zone_block.a 00:22:14.129 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:22:14.129 LIB libspdk_bdev_aio.a 00:22:14.129 SO libspdk_bdev_split.so.6.0 00:22:14.129 SO libspdk_bdev_zone_block.so.6.0 00:22:14.387 SO libspdk_bdev_aio.so.6.0 00:22:14.387 CC module/bdev/ftl/bdev_ftl_rpc.o 00:22:14.387 SYMLINK libspdk_bdev_split.so 00:22:14.387 SYMLINK libspdk_bdev_aio.so 00:22:14.387 SYMLINK libspdk_bdev_zone_block.so 00:22:14.387 CC module/bdev/raid/bdev_raid_rpc.o 00:22:14.387 CC module/bdev/virtio/bdev_virtio_rpc.o 00:22:14.387 CC module/bdev/raid/bdev_raid_sb.o 00:22:14.387 CC module/bdev/raid/raid0.o 00:22:14.387 LIB libspdk_bdev_iscsi.a 00:22:14.387 SO libspdk_bdev_iscsi.so.6.0 00:22:14.387 CC module/bdev/raid/raid1.o 00:22:14.387 SYMLINK libspdk_bdev_iscsi.so 00:22:14.387 CC module/bdev/raid/concat.o 00:22:14.645 LIB libspdk_bdev_ftl.a 00:22:14.645 CC module/bdev/raid/raid5f.o 00:22:14.645 SO libspdk_bdev_ftl.so.6.0 00:22:14.645 LIB libspdk_bdev_virtio.a 00:22:14.645 SYMLINK libspdk_bdev_ftl.so 00:22:14.645 SO libspdk_bdev_virtio.so.6.0 00:22:14.645 SYMLINK libspdk_bdev_virtio.so 00:22:15.212 LIB libspdk_bdev_raid.a 00:22:15.212 SO libspdk_bdev_raid.so.6.0 00:22:15.212 SYMLINK libspdk_bdev_raid.so 00:22:16.149 LIB libspdk_bdev_nvme.a 00:22:16.149 SO libspdk_bdev_nvme.so.7.1 00:22:16.408 SYMLINK libspdk_bdev_nvme.so 00:22:16.976 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:22:16.976 CC module/event/subsystems/keyring/keyring.o 00:22:16.976 CC module/event/subsystems/iobuf/iobuf.o 00:22:16.976 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:22:16.976 CC module/event/subsystems/scheduler/scheduler.o 00:22:16.976 CC module/event/subsystems/fsdev/fsdev.o 00:22:16.976 CC module/event/subsystems/sock/sock.o 00:22:16.976 CC module/event/subsystems/vmd/vmd_rpc.o 00:22:16.976 CC module/event/subsystems/vmd/vmd.o 
00:22:17.234 LIB libspdk_event_vhost_blk.a 00:22:17.234 LIB libspdk_event_fsdev.a 00:22:17.234 LIB libspdk_event_scheduler.a 00:22:17.234 LIB libspdk_event_keyring.a 00:22:17.234 LIB libspdk_event_sock.a 00:22:17.234 SO libspdk_event_vhost_blk.so.3.0 00:22:17.234 LIB libspdk_event_iobuf.a 00:22:17.234 SO libspdk_event_keyring.so.1.0 00:22:17.234 SO libspdk_event_fsdev.so.1.0 00:22:17.234 SO libspdk_event_scheduler.so.4.0 00:22:17.234 SO libspdk_event_sock.so.5.0 00:22:17.234 LIB libspdk_event_vmd.a 00:22:17.234 SO libspdk_event_iobuf.so.3.0 00:22:17.234 SYMLINK libspdk_event_vhost_blk.so 00:22:17.234 SYMLINK libspdk_event_keyring.so 00:22:17.234 SYMLINK libspdk_event_fsdev.so 00:22:17.234 SO libspdk_event_vmd.so.6.0 00:22:17.234 SYMLINK libspdk_event_scheduler.so 00:22:17.234 SYMLINK libspdk_event_sock.so 00:22:17.234 SYMLINK libspdk_event_iobuf.so 00:22:17.234 SYMLINK libspdk_event_vmd.so 00:22:17.493 CC module/event/subsystems/accel/accel.o 00:22:17.753 LIB libspdk_event_accel.a 00:22:17.753 SO libspdk_event_accel.so.6.0 00:22:17.753 SYMLINK libspdk_event_accel.so 00:22:18.321 CC module/event/subsystems/bdev/bdev.o 00:22:18.321 LIB libspdk_event_bdev.a 00:22:18.580 SO libspdk_event_bdev.so.6.0 00:22:18.580 SYMLINK libspdk_event_bdev.so 00:22:18.838 CC module/event/subsystems/scsi/scsi.o 00:22:18.838 CC module/event/subsystems/ublk/ublk.o 00:22:18.838 CC module/event/subsystems/nbd/nbd.o 00:22:18.838 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:22:18.838 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:22:19.128 LIB libspdk_event_ublk.a 00:22:19.128 LIB libspdk_event_scsi.a 00:22:19.128 LIB libspdk_event_nbd.a 00:22:19.128 SO libspdk_event_ublk.so.3.0 00:22:19.128 SO libspdk_event_scsi.so.6.0 00:22:19.128 SO libspdk_event_nbd.so.6.0 00:22:19.128 SYMLINK libspdk_event_ublk.so 00:22:19.128 SYMLINK libspdk_event_scsi.so 00:22:19.128 SYMLINK libspdk_event_nbd.so 00:22:19.128 LIB libspdk_event_nvmf.a 00:22:19.128 SO libspdk_event_nvmf.so.6.0 00:22:19.398 SYMLINK 
libspdk_event_nvmf.so 00:22:19.398 CC module/event/subsystems/iscsi/iscsi.o 00:22:19.398 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:22:19.657 LIB libspdk_event_iscsi.a 00:22:19.657 LIB libspdk_event_vhost_scsi.a 00:22:19.657 SO libspdk_event_iscsi.so.6.0 00:22:19.657 SO libspdk_event_vhost_scsi.so.3.0 00:22:19.657 SYMLINK libspdk_event_iscsi.so 00:22:19.657 SYMLINK libspdk_event_vhost_scsi.so 00:22:19.917 SO libspdk.so.6.0 00:22:19.917 SYMLINK libspdk.so 00:22:20.177 TEST_HEADER include/spdk/accel.h 00:22:20.177 TEST_HEADER include/spdk/accel_module.h 00:22:20.177 CXX app/trace/trace.o 00:22:20.177 CC app/trace_record/trace_record.o 00:22:20.177 TEST_HEADER include/spdk/assert.h 00:22:20.177 TEST_HEADER include/spdk/barrier.h 00:22:20.437 TEST_HEADER include/spdk/base64.h 00:22:20.437 TEST_HEADER include/spdk/bdev.h 00:22:20.437 TEST_HEADER include/spdk/bdev_module.h 00:22:20.437 TEST_HEADER include/spdk/bdev_zone.h 00:22:20.437 TEST_HEADER include/spdk/bit_array.h 00:22:20.437 TEST_HEADER include/spdk/bit_pool.h 00:22:20.437 TEST_HEADER include/spdk/blob_bdev.h 00:22:20.437 TEST_HEADER include/spdk/blobfs_bdev.h 00:22:20.437 TEST_HEADER include/spdk/blobfs.h 00:22:20.437 TEST_HEADER include/spdk/blob.h 00:22:20.437 TEST_HEADER include/spdk/conf.h 00:22:20.437 TEST_HEADER include/spdk/config.h 00:22:20.437 TEST_HEADER include/spdk/cpuset.h 00:22:20.437 TEST_HEADER include/spdk/crc16.h 00:22:20.437 TEST_HEADER include/spdk/crc32.h 00:22:20.437 TEST_HEADER include/spdk/crc64.h 00:22:20.437 CC examples/interrupt_tgt/interrupt_tgt.o 00:22:20.437 TEST_HEADER include/spdk/dif.h 00:22:20.437 TEST_HEADER include/spdk/dma.h 00:22:20.437 TEST_HEADER include/spdk/endian.h 00:22:20.437 TEST_HEADER include/spdk/env_dpdk.h 00:22:20.437 TEST_HEADER include/spdk/env.h 00:22:20.437 TEST_HEADER include/spdk/event.h 00:22:20.437 TEST_HEADER include/spdk/fd_group.h 00:22:20.437 TEST_HEADER include/spdk/fd.h 00:22:20.437 TEST_HEADER include/spdk/file.h 00:22:20.437 
TEST_HEADER include/spdk/fsdev.h 00:22:20.437 TEST_HEADER include/spdk/fsdev_module.h 00:22:20.437 TEST_HEADER include/spdk/ftl.h 00:22:20.437 TEST_HEADER include/spdk/fuse_dispatcher.h 00:22:20.437 TEST_HEADER include/spdk/gpt_spec.h 00:22:20.437 TEST_HEADER include/spdk/hexlify.h 00:22:20.437 TEST_HEADER include/spdk/histogram_data.h 00:22:20.437 CC test/thread/poller_perf/poller_perf.o 00:22:20.437 TEST_HEADER include/spdk/idxd.h 00:22:20.437 TEST_HEADER include/spdk/idxd_spec.h 00:22:20.437 TEST_HEADER include/spdk/init.h 00:22:20.437 TEST_HEADER include/spdk/ioat.h 00:22:20.437 CC examples/util/zipf/zipf.o 00:22:20.437 CC examples/ioat/perf/perf.o 00:22:20.437 TEST_HEADER include/spdk/ioat_spec.h 00:22:20.437 TEST_HEADER include/spdk/iscsi_spec.h 00:22:20.437 TEST_HEADER include/spdk/json.h 00:22:20.437 TEST_HEADER include/spdk/jsonrpc.h 00:22:20.437 TEST_HEADER include/spdk/keyring.h 00:22:20.437 TEST_HEADER include/spdk/keyring_module.h 00:22:20.437 TEST_HEADER include/spdk/likely.h 00:22:20.437 TEST_HEADER include/spdk/log.h 00:22:20.437 TEST_HEADER include/spdk/lvol.h 00:22:20.437 TEST_HEADER include/spdk/md5.h 00:22:20.437 TEST_HEADER include/spdk/memory.h 00:22:20.437 TEST_HEADER include/spdk/mmio.h 00:22:20.437 TEST_HEADER include/spdk/nbd.h 00:22:20.437 TEST_HEADER include/spdk/net.h 00:22:20.437 TEST_HEADER include/spdk/notify.h 00:22:20.437 TEST_HEADER include/spdk/nvme.h 00:22:20.437 TEST_HEADER include/spdk/nvme_intel.h 00:22:20.437 TEST_HEADER include/spdk/nvme_ocssd.h 00:22:20.437 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:22:20.437 TEST_HEADER include/spdk/nvme_spec.h 00:22:20.437 TEST_HEADER include/spdk/nvme_zns.h 00:22:20.437 CC test/dma/test_dma/test_dma.o 00:22:20.437 TEST_HEADER include/spdk/nvmf_cmd.h 00:22:20.437 CC test/app/bdev_svc/bdev_svc.o 00:22:20.437 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:22:20.437 TEST_HEADER include/spdk/nvmf.h 00:22:20.437 TEST_HEADER include/spdk/nvmf_spec.h 00:22:20.437 TEST_HEADER 
include/spdk/nvmf_transport.h 00:22:20.437 TEST_HEADER include/spdk/opal.h 00:22:20.437 TEST_HEADER include/spdk/opal_spec.h 00:22:20.437 CC test/env/mem_callbacks/mem_callbacks.o 00:22:20.437 TEST_HEADER include/spdk/pci_ids.h 00:22:20.437 TEST_HEADER include/spdk/pipe.h 00:22:20.437 TEST_HEADER include/spdk/queue.h 00:22:20.437 TEST_HEADER include/spdk/reduce.h 00:22:20.437 TEST_HEADER include/spdk/rpc.h 00:22:20.437 TEST_HEADER include/spdk/scheduler.h 00:22:20.437 TEST_HEADER include/spdk/scsi.h 00:22:20.437 TEST_HEADER include/spdk/scsi_spec.h 00:22:20.437 TEST_HEADER include/spdk/sock.h 00:22:20.437 TEST_HEADER include/spdk/stdinc.h 00:22:20.437 TEST_HEADER include/spdk/string.h 00:22:20.437 TEST_HEADER include/spdk/thread.h 00:22:20.437 TEST_HEADER include/spdk/trace.h 00:22:20.437 TEST_HEADER include/spdk/trace_parser.h 00:22:20.437 TEST_HEADER include/spdk/tree.h 00:22:20.437 TEST_HEADER include/spdk/ublk.h 00:22:20.437 TEST_HEADER include/spdk/util.h 00:22:20.437 TEST_HEADER include/spdk/uuid.h 00:22:20.437 TEST_HEADER include/spdk/version.h 00:22:20.437 TEST_HEADER include/spdk/vfio_user_pci.h 00:22:20.437 TEST_HEADER include/spdk/vfio_user_spec.h 00:22:20.437 TEST_HEADER include/spdk/vhost.h 00:22:20.437 TEST_HEADER include/spdk/vmd.h 00:22:20.437 TEST_HEADER include/spdk/xor.h 00:22:20.437 TEST_HEADER include/spdk/zipf.h 00:22:20.437 CXX test/cpp_headers/accel.o 00:22:20.437 LINK interrupt_tgt 00:22:20.437 LINK zipf 00:22:20.437 LINK poller_perf 00:22:20.437 LINK spdk_trace_record 00:22:20.697 LINK bdev_svc 00:22:20.697 LINK ioat_perf 00:22:20.697 LINK spdk_trace 00:22:20.697 CXX test/cpp_headers/accel_module.o 00:22:20.697 CXX test/cpp_headers/assert.o 00:22:20.697 CXX test/cpp_headers/barrier.o 00:22:20.697 CXX test/cpp_headers/base64.o 00:22:20.956 CC examples/ioat/verify/verify.o 00:22:20.956 CXX test/cpp_headers/bdev.o 00:22:20.956 CC examples/thread/thread/thread_ex.o 00:22:20.956 CC test/app/histogram_perf/histogram_perf.o 00:22:20.956 CC 
test/app/jsoncat/jsoncat.o 00:22:20.956 LINK test_dma 00:22:20.956 CC test/app/stub/stub.o 00:22:20.956 LINK mem_callbacks 00:22:20.956 CC app/nvmf_tgt/nvmf_main.o 00:22:20.956 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:22:20.956 CXX test/cpp_headers/bdev_module.o 00:22:20.956 LINK histogram_perf 00:22:20.956 LINK jsoncat 00:22:20.956 LINK verify 00:22:21.216 LINK stub 00:22:21.216 CXX test/cpp_headers/bdev_zone.o 00:22:21.216 LINK thread 00:22:21.216 LINK nvmf_tgt 00:22:21.216 CC test/env/vtophys/vtophys.o 00:22:21.216 CXX test/cpp_headers/bit_array.o 00:22:21.216 CXX test/cpp_headers/bit_pool.o 00:22:21.216 CXX test/cpp_headers/blob_bdev.o 00:22:21.216 LINK vtophys 00:22:21.216 CXX test/cpp_headers/blobfs_bdev.o 00:22:21.216 CC app/iscsi_tgt/iscsi_tgt.o 00:22:21.476 CC app/spdk_lspci/spdk_lspci.o 00:22:21.476 CC app/spdk_tgt/spdk_tgt.o 00:22:21.476 CXX test/cpp_headers/blobfs.o 00:22:21.476 LINK nvme_fuzz 00:22:21.476 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:22:21.476 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:22:21.476 CC examples/sock/hello_world/hello_sock.o 00:22:21.476 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:22:21.476 LINK iscsi_tgt 00:22:21.476 LINK spdk_lspci 00:22:21.476 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:22:21.476 CXX test/cpp_headers/blob.o 00:22:21.476 LINK spdk_tgt 00:22:21.735 CC test/rpc_client/rpc_client_test.o 00:22:21.735 CC app/spdk_nvme_perf/perf.o 00:22:21.735 LINK env_dpdk_post_init 00:22:21.735 CXX test/cpp_headers/conf.o 00:22:21.735 LINK hello_sock 00:22:21.735 CC app/spdk_nvme_identify/identify.o 00:22:21.735 LINK rpc_client_test 00:22:21.735 CC app/spdk_nvme_discover/discovery_aer.o 00:22:21.994 CC app/spdk_top/spdk_top.o 00:22:21.994 CXX test/cpp_headers/config.o 00:22:21.994 CXX test/cpp_headers/cpuset.o 00:22:21.994 CC test/env/memory/memory_ut.o 00:22:21.994 LINK vhost_fuzz 00:22:21.994 LINK spdk_nvme_discover 00:22:21.994 CXX test/cpp_headers/crc16.o 00:22:22.254 CC examples/vmd/lsvmd/lsvmd.o 
00:22:22.254 CC examples/idxd/perf/perf.o 00:22:22.254 CXX test/cpp_headers/crc32.o 00:22:22.254 CXX test/cpp_headers/crc64.o 00:22:22.254 LINK lsvmd 00:22:22.254 CXX test/cpp_headers/dif.o 00:22:22.513 CC app/vhost/vhost.o 00:22:22.513 CC app/spdk_dd/spdk_dd.o 00:22:22.513 CXX test/cpp_headers/dma.o 00:22:22.513 LINK idxd_perf 00:22:22.513 CC examples/vmd/led/led.o 00:22:22.513 LINK spdk_nvme_perf 00:22:22.513 LINK vhost 00:22:22.772 CXX test/cpp_headers/endian.o 00:22:22.772 LINK led 00:22:22.772 LINK spdk_nvme_identify 00:22:22.772 LINK spdk_top 00:22:22.772 CXX test/cpp_headers/env_dpdk.o 00:22:22.772 CC examples/fsdev/hello_world/hello_fsdev.o 00:22:22.772 LINK spdk_dd 00:22:22.772 CXX test/cpp_headers/env.o 00:22:22.772 CC app/fio/nvme/fio_plugin.o 00:22:23.035 CC test/env/pci/pci_ut.o 00:22:23.035 CC test/accel/dif/dif.o 00:22:23.035 CXX test/cpp_headers/event.o 00:22:23.035 CC app/fio/bdev/fio_plugin.o 00:22:23.035 LINK memory_ut 00:22:23.035 LINK hello_fsdev 00:22:23.295 CC test/blobfs/mkfs/mkfs.o 00:22:23.295 CC test/event/event_perf/event_perf.o 00:22:23.295 CXX test/cpp_headers/fd_group.o 00:22:23.295 LINK iscsi_fuzz 00:22:23.295 LINK pci_ut 00:22:23.295 LINK event_perf 00:22:23.295 CXX test/cpp_headers/fd.o 00:22:23.295 CC test/event/reactor/reactor.o 00:22:23.295 LINK mkfs 00:22:23.555 LINK spdk_nvme 00:22:23.555 CC examples/accel/perf/accel_perf.o 00:22:23.555 LINK reactor 00:22:23.555 CXX test/cpp_headers/file.o 00:22:23.555 CXX test/cpp_headers/fsdev.o 00:22:23.555 CXX test/cpp_headers/fsdev_module.o 00:22:23.555 CC test/event/reactor_perf/reactor_perf.o 00:22:23.555 LINK spdk_bdev 00:22:23.814 CC test/lvol/esnap/esnap.o 00:22:23.814 LINK dif 00:22:23.814 CXX test/cpp_headers/ftl.o 00:22:23.814 CC test/nvme/aer/aer.o 00:22:23.814 LINK reactor_perf 00:22:23.814 CC test/nvme/reset/reset.o 00:22:23.814 CC test/nvme/sgl/sgl.o 00:22:23.814 CC test/nvme/e2edp/nvme_dp.o 00:22:23.814 CXX test/cpp_headers/fuse_dispatcher.o 00:22:23.814 CC 
examples/blob/hello_world/hello_blob.o 00:22:24.073 CC test/event/app_repeat/app_repeat.o 00:22:24.073 LINK reset 00:22:24.073 LINK aer 00:22:24.073 CC test/event/scheduler/scheduler.o 00:22:24.073 CXX test/cpp_headers/gpt_spec.o 00:22:24.073 LINK sgl 00:22:24.073 LINK accel_perf 00:22:24.073 LINK app_repeat 00:22:24.073 LINK nvme_dp 00:22:24.073 LINK hello_blob 00:22:24.073 CXX test/cpp_headers/hexlify.o 00:22:24.333 LINK scheduler 00:22:24.333 CC examples/blob/cli/blobcli.o 00:22:24.333 CC test/nvme/overhead/overhead.o 00:22:24.333 CC test/nvme/err_injection/err_injection.o 00:22:24.333 CC test/nvme/startup/startup.o 00:22:24.333 CC test/nvme/reserve/reserve.o 00:22:24.333 CXX test/cpp_headers/histogram_data.o 00:22:24.333 CC test/nvme/simple_copy/simple_copy.o 00:22:24.333 CC test/nvme/connect_stress/connect_stress.o 00:22:24.592 LINK err_injection 00:22:24.593 LINK startup 00:22:24.593 CXX test/cpp_headers/idxd.o 00:22:24.593 LINK reserve 00:22:24.593 LINK overhead 00:22:24.593 LINK connect_stress 00:22:24.593 CC test/bdev/bdevio/bdevio.o 00:22:24.593 LINK simple_copy 00:22:24.593 CXX test/cpp_headers/idxd_spec.o 00:22:24.851 CXX test/cpp_headers/init.o 00:22:24.851 CC test/nvme/boot_partition/boot_partition.o 00:22:24.851 CXX test/cpp_headers/ioat.o 00:22:24.851 LINK blobcli 00:22:24.851 CC examples/nvme/hello_world/hello_world.o 00:22:24.851 CC test/nvme/compliance/nvme_compliance.o 00:22:24.851 CXX test/cpp_headers/ioat_spec.o 00:22:24.851 LINK boot_partition 00:22:24.851 CC examples/nvme/reconnect/reconnect.o 00:22:24.851 CC examples/nvme/nvme_manage/nvme_manage.o 00:22:25.110 CC examples/bdev/hello_world/hello_bdev.o 00:22:25.110 LINK bdevio 00:22:25.110 CXX test/cpp_headers/iscsi_spec.o 00:22:25.110 LINK hello_world 00:22:25.110 CC examples/bdev/bdevperf/bdevperf.o 00:22:25.110 CC test/nvme/fused_ordering/fused_ordering.o 00:22:25.110 LINK nvme_compliance 00:22:25.110 CXX test/cpp_headers/json.o 00:22:25.369 LINK hello_bdev 00:22:25.369 LINK reconnect 
00:22:25.369 CC examples/nvme/arbitration/arbitration.o 00:22:25.369 CC test/nvme/doorbell_aers/doorbell_aers.o 00:22:25.369 LINK fused_ordering 00:22:25.369 CXX test/cpp_headers/jsonrpc.o 00:22:25.369 CC test/nvme/fdp/fdp.o 00:22:25.369 CXX test/cpp_headers/keyring.o 00:22:25.369 LINK nvme_manage 00:22:25.628 LINK doorbell_aers 00:22:25.628 CXX test/cpp_headers/keyring_module.o 00:22:25.628 CC examples/nvme/hotplug/hotplug.o 00:22:25.628 CC test/nvme/cuse/cuse.o 00:22:25.628 LINK arbitration 00:22:25.628 CC examples/nvme/cmb_copy/cmb_copy.o 00:22:25.628 CC examples/nvme/abort/abort.o 00:22:25.628 CXX test/cpp_headers/likely.o 00:22:25.887 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:22:25.887 LINK fdp 00:22:25.887 CXX test/cpp_headers/log.o 00:22:25.887 LINK hotplug 00:22:25.887 CXX test/cpp_headers/lvol.o 00:22:25.887 LINK cmb_copy 00:22:25.887 CXX test/cpp_headers/md5.o 00:22:25.887 LINK pmr_persistence 00:22:25.887 LINK bdevperf 00:22:25.887 CXX test/cpp_headers/memory.o 00:22:25.887 CXX test/cpp_headers/mmio.o 00:22:26.146 CXX test/cpp_headers/nbd.o 00:22:26.146 CXX test/cpp_headers/net.o 00:22:26.146 CXX test/cpp_headers/notify.o 00:22:26.146 CXX test/cpp_headers/nvme.o 00:22:26.146 LINK abort 00:22:26.146 CXX test/cpp_headers/nvme_intel.o 00:22:26.146 CXX test/cpp_headers/nvme_ocssd.o 00:22:26.146 CXX test/cpp_headers/nvme_ocssd_spec.o 00:22:26.146 CXX test/cpp_headers/nvme_spec.o 00:22:26.146 CXX test/cpp_headers/nvme_zns.o 00:22:26.146 CXX test/cpp_headers/nvmf_cmd.o 00:22:26.146 CXX test/cpp_headers/nvmf_fc_spec.o 00:22:26.405 CXX test/cpp_headers/nvmf.o 00:22:26.405 CXX test/cpp_headers/nvmf_spec.o 00:22:26.405 CXX test/cpp_headers/nvmf_transport.o 00:22:26.405 CXX test/cpp_headers/opal.o 00:22:26.405 CXX test/cpp_headers/opal_spec.o 00:22:26.405 CXX test/cpp_headers/pci_ids.o 00:22:26.405 CXX test/cpp_headers/pipe.o 00:22:26.405 CXX test/cpp_headers/queue.o 00:22:26.405 CXX test/cpp_headers/reduce.o 00:22:26.405 CXX test/cpp_headers/rpc.o 
00:22:26.405 CC examples/nvmf/nvmf/nvmf.o 00:22:26.405 CXX test/cpp_headers/scheduler.o 00:22:26.664 CXX test/cpp_headers/scsi.o 00:22:26.664 CXX test/cpp_headers/scsi_spec.o 00:22:26.664 CXX test/cpp_headers/sock.o 00:22:26.664 CXX test/cpp_headers/stdinc.o 00:22:26.664 CXX test/cpp_headers/string.o 00:22:26.664 CXX test/cpp_headers/thread.o 00:22:26.664 CXX test/cpp_headers/trace.o 00:22:26.664 CXX test/cpp_headers/trace_parser.o 00:22:26.664 CXX test/cpp_headers/tree.o 00:22:26.664 CXX test/cpp_headers/ublk.o 00:22:26.664 CXX test/cpp_headers/util.o 00:22:26.664 CXX test/cpp_headers/uuid.o 00:22:26.664 CXX test/cpp_headers/version.o 00:22:26.923 CXX test/cpp_headers/vfio_user_pci.o 00:22:26.923 LINK nvmf 00:22:26.923 CXX test/cpp_headers/vfio_user_spec.o 00:22:26.923 CXX test/cpp_headers/vhost.o 00:22:26.923 CXX test/cpp_headers/vmd.o 00:22:26.923 CXX test/cpp_headers/xor.o 00:22:26.923 CXX test/cpp_headers/zipf.o 00:22:26.923 LINK cuse 00:22:29.463 LINK esnap 00:22:30.032 00:22:30.032 real 1m20.178s 00:22:30.032 user 6m56.186s 00:22:30.032 sys 1m49.155s 00:22:30.032 18:21:00 make -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:22:30.032 18:21:00 make -- common/autotest_common.sh@10 -- $ set +x 00:22:30.032 ************************************ 00:22:30.032 END TEST make 00:22:30.032 ************************************ 00:22:30.032 18:21:00 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:22:30.032 18:21:00 -- pm/common@29 -- $ signal_monitor_resources TERM 00:22:30.032 18:21:00 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:22:30.032 18:21:00 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:22:30.032 18:21:00 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:22:30.032 18:21:00 -- pm/common@44 -- $ pid=5256 00:22:30.032 18:21:00 -- pm/common@50 -- $ kill -TERM 5256 00:22:30.032 18:21:00 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:22:30.032 18:21:00 -- 
pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:22:30.032 18:21:00 -- pm/common@44 -- $ pid=5258 00:22:30.032 18:21:00 -- pm/common@50 -- $ kill -TERM 5258 00:22:30.032 18:21:00 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:22:30.032 18:21:00 -- spdk/autorun.sh@27 -- $ sudo -E /home/vagrant/spdk_repo/spdk/autotest.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:22:30.032 18:21:00 -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:22:30.032 18:21:00 -- common/autotest_common.sh@1711 -- # lcov --version 00:22:30.032 18:21:00 -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:22:30.341 18:21:01 -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:22:30.341 18:21:01 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:30.341 18:21:01 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:30.341 18:21:01 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:30.341 18:21:01 -- scripts/common.sh@336 -- # IFS=.-: 00:22:30.341 18:21:01 -- scripts/common.sh@336 -- # read -ra ver1 00:22:30.341 18:21:01 -- scripts/common.sh@337 -- # IFS=.-: 00:22:30.341 18:21:01 -- scripts/common.sh@337 -- # read -ra ver2 00:22:30.341 18:21:01 -- scripts/common.sh@338 -- # local 'op=<' 00:22:30.341 18:21:01 -- scripts/common.sh@340 -- # ver1_l=2 00:22:30.341 18:21:01 -- scripts/common.sh@341 -- # ver2_l=1 00:22:30.341 18:21:01 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:30.341 18:21:01 -- scripts/common.sh@344 -- # case "$op" in 00:22:30.341 18:21:01 -- scripts/common.sh@345 -- # : 1 00:22:30.341 18:21:01 -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:30.341 18:21:01 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:30.341 18:21:01 -- scripts/common.sh@365 -- # decimal 1 00:22:30.341 18:21:01 -- scripts/common.sh@353 -- # local d=1 00:22:30.341 18:21:01 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:30.341 18:21:01 -- scripts/common.sh@355 -- # echo 1 00:22:30.341 18:21:01 -- scripts/common.sh@365 -- # ver1[v]=1 00:22:30.341 18:21:01 -- scripts/common.sh@366 -- # decimal 2 00:22:30.341 18:21:01 -- scripts/common.sh@353 -- # local d=2 00:22:30.342 18:21:01 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:30.342 18:21:01 -- scripts/common.sh@355 -- # echo 2 00:22:30.342 18:21:01 -- scripts/common.sh@366 -- # ver2[v]=2 00:22:30.342 18:21:01 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:30.342 18:21:01 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:30.342 18:21:01 -- scripts/common.sh@368 -- # return 0 00:22:30.342 18:21:01 -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:30.342 18:21:01 -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:22:30.342 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:30.342 --rc genhtml_branch_coverage=1 00:22:30.342 --rc genhtml_function_coverage=1 00:22:30.342 --rc genhtml_legend=1 00:22:30.342 --rc geninfo_all_blocks=1 00:22:30.342 --rc geninfo_unexecuted_blocks=1 00:22:30.342 00:22:30.342 ' 00:22:30.342 18:21:01 -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:22:30.342 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:30.342 --rc genhtml_branch_coverage=1 00:22:30.342 --rc genhtml_function_coverage=1 00:22:30.342 --rc genhtml_legend=1 00:22:30.342 --rc geninfo_all_blocks=1 00:22:30.342 --rc geninfo_unexecuted_blocks=1 00:22:30.342 00:22:30.342 ' 00:22:30.342 18:21:01 -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:22:30.342 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:30.342 --rc genhtml_branch_coverage=1 00:22:30.342 --rc 
genhtml_function_coverage=1 00:22:30.342 --rc genhtml_legend=1 00:22:30.342 --rc geninfo_all_blocks=1 00:22:30.342 --rc geninfo_unexecuted_blocks=1 00:22:30.342 00:22:30.342 ' 00:22:30.342 18:21:01 -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:22:30.342 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:30.342 --rc genhtml_branch_coverage=1 00:22:30.342 --rc genhtml_function_coverage=1 00:22:30.342 --rc genhtml_legend=1 00:22:30.342 --rc geninfo_all_blocks=1 00:22:30.342 --rc geninfo_unexecuted_blocks=1 00:22:30.342 00:22:30.342 ' 00:22:30.342 18:21:01 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:22:30.342 18:21:01 -- nvmf/common.sh@7 -- # uname -s 00:22:30.342 18:21:01 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:30.342 18:21:01 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:30.342 18:21:01 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:30.342 18:21:01 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:30.342 18:21:01 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:30.342 18:21:01 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:30.342 18:21:01 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:30.342 18:21:01 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:30.342 18:21:01 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:30.342 18:21:01 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:30.342 18:21:01 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:7c846355-cda6-4b45-925e-50e7b08c3e5f 00:22:30.342 18:21:01 -- nvmf/common.sh@18 -- # NVME_HOSTID=7c846355-cda6-4b45-925e-50e7b08c3e5f 00:22:30.342 18:21:01 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:30.342 18:21:01 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:30.342 18:21:01 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:22:30.342 18:21:01 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 
00:22:30.342 18:21:01 -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:22:30.342 18:21:01 -- scripts/common.sh@15 -- # shopt -s extglob 00:22:30.342 18:21:01 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:30.342 18:21:01 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:30.342 18:21:01 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:30.342 18:21:01 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:30.342 18:21:01 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:30.342 18:21:01 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:30.342 18:21:01 -- paths/export.sh@5 -- # export PATH 00:22:30.342 18:21:01 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:30.342 18:21:01 -- nvmf/common.sh@51 -- # : 0 00:22:30.342 18:21:01 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:30.342 18:21:01 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:30.342 18:21:01 -- nvmf/common.sh@25 
-- # '[' 0 -eq 1 ']' 00:22:30.342 18:21:01 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:30.342 18:21:01 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:30.342 18:21:01 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:30.342 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:30.342 18:21:01 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:30.342 18:21:01 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:30.342 18:21:01 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:30.342 18:21:01 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:22:30.342 18:21:01 -- spdk/autotest.sh@32 -- # uname -s 00:22:30.342 18:21:01 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:22:30.342 18:21:01 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:22:30.342 18:21:01 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:22:30.342 18:21:01 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:22:30.342 18:21:01 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:22:30.342 18:21:01 -- spdk/autotest.sh@44 -- # modprobe nbd 00:22:30.342 18:21:01 -- spdk/autotest.sh@46 -- # type -P udevadm 00:22:30.342 18:21:01 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:22:30.342 18:21:01 -- spdk/autotest.sh@48 -- # udevadm_pid=54169 00:22:30.342 18:21:01 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:22:30.342 18:21:01 -- pm/common@17 -- # local monitor 00:22:30.342 18:21:01 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:22:30.342 18:21:01 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:22:30.342 18:21:01 -- pm/common@21 -- # date +%s 00:22:30.342 18:21:01 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:22:30.342 18:21:01 -- pm/common@21 -- # 
/home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1733509261 00:22:30.342 18:21:01 -- pm/common@25 -- # sleep 1 00:22:30.342 18:21:01 -- pm/common@21 -- # date +%s 00:22:30.342 18:21:01 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1733509261 00:22:30.342 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1733509261_collect-cpu-load.pm.log 00:22:30.342 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1733509261_collect-vmstat.pm.log 00:22:31.279 18:21:02 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:22:31.279 18:21:02 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:22:31.279 18:21:02 -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:31.279 18:21:02 -- common/autotest_common.sh@10 -- # set +x 00:22:31.279 18:21:02 -- spdk/autotest.sh@59 -- # create_test_list 00:22:31.279 18:21:02 -- common/autotest_common.sh@752 -- # xtrace_disable 00:22:31.279 18:21:02 -- common/autotest_common.sh@10 -- # set +x 00:22:31.279 18:21:02 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:22:31.538 18:21:02 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:22:31.538 18:21:02 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:22:31.538 18:21:02 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:22:31.538 18:21:02 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:22:31.538 18:21:02 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:22:31.538 18:21:02 -- common/autotest_common.sh@1457 -- # uname 00:22:31.538 18:21:02 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:22:31.538 18:21:02 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:22:31.538 18:21:02 -- 
common/autotest_common.sh@1477 -- # uname 00:22:31.538 18:21:02 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:22:31.538 18:21:02 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:22:31.538 18:21:02 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:22:31.538 lcov: LCOV version 1.15 00:22:31.538 18:21:02 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:22:46.423 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:22:46.423 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:23:04.510 18:21:32 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:23:04.510 18:21:32 -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:04.510 18:21:32 -- common/autotest_common.sh@10 -- # set +x 00:23:04.510 18:21:32 -- spdk/autotest.sh@78 -- # rm -f 00:23:04.510 18:21:32 -- spdk/autotest.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:23:04.510 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:23:04.510 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:23:04.510 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:23:04.510 18:21:33 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:23:04.510 18:21:33 -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:23:04.510 18:21:33 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:23:04.510 18:21:33 -- common/autotest_common.sh@1658 -- # 
zoned_ctrls=() 00:23:04.510 18:21:33 -- common/autotest_common.sh@1658 -- # local -A zoned_ctrls 00:23:04.510 18:21:33 -- common/autotest_common.sh@1659 -- # local nvme bdf ns 00:23:04.510 18:21:33 -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:23:04.510 18:21:33 -- common/autotest_common.sh@1669 -- # bdf=0000:00:10.0 00:23:04.510 18:21:33 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:23:04.510 18:21:33 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n1 00:23:04.510 18:21:33 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:23:04.510 18:21:33 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:23:04.510 18:21:33 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:23:04.510 18:21:33 -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:23:04.510 18:21:33 -- common/autotest_common.sh@1669 -- # bdf=0000:00:11.0 00:23:04.510 18:21:33 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:23:04.510 18:21:33 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme1n1 00:23:04.510 18:21:33 -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:23:04.510 18:21:33 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:23:04.510 18:21:33 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:23:04.510 18:21:33 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:23:04.510 18:21:33 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme1n2 00:23:04.510 18:21:33 -- common/autotest_common.sh@1650 -- # local device=nvme1n2 00:23:04.510 18:21:33 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:23:04.510 18:21:33 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:23:04.510 18:21:33 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:23:04.510 18:21:33 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme1n3 
00:23:04.510 18:21:33 -- common/autotest_common.sh@1650 -- # local device=nvme1n3 00:23:04.510 18:21:33 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:23:04.510 18:21:33 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:23:04.510 18:21:33 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:23:04.510 18:21:33 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:23:04.510 18:21:33 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:23:04.510 18:21:33 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:23:04.510 18:21:33 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:23:04.510 18:21:33 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:23:04.510 No valid GPT data, bailing 00:23:04.510 18:21:33 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:23:04.510 18:21:33 -- scripts/common.sh@394 -- # pt= 00:23:04.510 18:21:33 -- scripts/common.sh@395 -- # return 1 00:23:04.510 18:21:33 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:23:04.510 1+0 records in 00:23:04.510 1+0 records out 00:23:04.510 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00415494 s, 252 MB/s 00:23:04.510 18:21:33 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:23:04.510 18:21:33 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:23:04.510 18:21:33 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1 00:23:04.510 18:21:33 -- scripts/common.sh@381 -- # local block=/dev/nvme1n1 pt 00:23:04.510 18:21:33 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:23:04.510 No valid GPT data, bailing 00:23:04.510 18:21:33 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:23:04.510 18:21:33 -- scripts/common.sh@394 -- # pt= 00:23:04.510 18:21:33 -- scripts/common.sh@395 -- # return 1 00:23:04.510 18:21:33 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:23:04.510 1+0 records in 
00:23:04.510 1+0 records out 00:23:04.510 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00583838 s, 180 MB/s 00:23:04.510 18:21:33 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:23:04.510 18:21:33 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:23:04.510 18:21:33 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n2 00:23:04.510 18:21:33 -- scripts/common.sh@381 -- # local block=/dev/nvme1n2 pt 00:23:04.510 18:21:33 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n2 00:23:04.510 No valid GPT data, bailing 00:23:04.510 18:21:33 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:23:04.510 18:21:33 -- scripts/common.sh@394 -- # pt= 00:23:04.510 18:21:33 -- scripts/common.sh@395 -- # return 1 00:23:04.510 18:21:33 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n2 bs=1M count=1 00:23:04.510 1+0 records in 00:23:04.510 1+0 records out 00:23:04.510 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00440386 s, 238 MB/s 00:23:04.510 18:21:33 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:23:04.510 18:21:33 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:23:04.510 18:21:33 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n3 00:23:04.510 18:21:33 -- scripts/common.sh@381 -- # local block=/dev/nvme1n3 pt 00:23:04.510 18:21:33 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n3 00:23:04.510 No valid GPT data, bailing 00:23:04.510 18:21:33 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:23:04.510 18:21:33 -- scripts/common.sh@394 -- # pt= 00:23:04.510 18:21:33 -- scripts/common.sh@395 -- # return 1 00:23:04.510 18:21:33 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n3 bs=1M count=1 00:23:04.510 1+0 records in 00:23:04.510 1+0 records out 00:23:04.510 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00408497 s, 257 MB/s 00:23:04.510 18:21:33 -- spdk/autotest.sh@105 -- # sync 00:23:04.510 18:21:33 -- spdk/autotest.sh@107 -- # 
xtrace_disable_per_cmd reap_spdk_processes 00:23:04.511 18:21:33 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:23:04.511 18:21:33 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:23:05.453 18:21:36 -- spdk/autotest.sh@111 -- # uname -s 00:23:05.453 18:21:36 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:23:05.453 18:21:36 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:23:05.453 18:21:36 -- spdk/autotest.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:23:06.390 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:23:06.390 Hugepages 00:23:06.390 node hugesize free / total 00:23:06.390 node0 1048576kB 0 / 0 00:23:06.390 node0 2048kB 0 / 0 00:23:06.390 00:23:06.390 Type BDF Vendor Device NUMA Driver Device Block devices 00:23:06.390 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:23:06.649 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:23:06.649 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:23:06.649 18:21:37 -- spdk/autotest.sh@117 -- # uname -s 00:23:06.649 18:21:37 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:23:06.649 18:21:37 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:23:06.649 18:21:37 -- common/autotest_common.sh@1516 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:23:07.588 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:23:07.588 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:23:07.588 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:23:07.847 18:21:38 -- common/autotest_common.sh@1517 -- # sleep 1 00:23:08.786 18:21:39 -- common/autotest_common.sh@1518 -- # bdfs=() 00:23:08.786 18:21:39 -- common/autotest_common.sh@1518 -- # local bdfs 00:23:08.786 18:21:39 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:23:08.786 18:21:39 -- common/autotest_common.sh@1520 -- # 
get_nvme_bdfs 00:23:08.786 18:21:39 -- common/autotest_common.sh@1498 -- # bdfs=() 00:23:08.786 18:21:39 -- common/autotest_common.sh@1498 -- # local bdfs 00:23:08.786 18:21:39 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:23:08.786 18:21:39 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:23:08.786 18:21:39 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:23:08.786 18:21:39 -- common/autotest_common.sh@1500 -- # (( 2 == 0 )) 00:23:08.786 18:21:39 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:23:08.786 18:21:39 -- common/autotest_common.sh@1522 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:23:09.356 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:23:09.356 Waiting for block devices as requested 00:23:09.356 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:23:09.616 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:23:09.616 18:21:40 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:23:09.616 18:21:40 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:23:09.616 18:21:40 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:23:09.616 18:21:40 -- common/autotest_common.sh@1487 -- # grep 0000:00:10.0/nvme/nvme 00:23:09.616 18:21:40 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:23:09.616 18:21:40 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:23:09.616 18:21:40 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:23:09.616 18:21:40 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme1 00:23:09.616 18:21:40 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme1 
00:23:09.616 18:21:40 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme1 ]] 00:23:09.616 18:21:40 -- common/autotest_common.sh@1531 -- # grep oacs 00:23:09.616 18:21:40 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme1 00:23:09.616 18:21:40 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:23:09.616 18:21:40 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:23:09.616 18:21:40 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:23:09.616 18:21:40 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:23:09.616 18:21:40 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme1 00:23:09.616 18:21:40 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:23:09.616 18:21:40 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:23:09.616 18:21:40 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:23:09.616 18:21:40 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:23:09.616 18:21:40 -- common/autotest_common.sh@1543 -- # continue 00:23:09.616 18:21:40 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:23:09.616 18:21:40 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:23:09.616 18:21:40 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:23:09.616 18:21:40 -- common/autotest_common.sh@1487 -- # grep 0000:00:11.0/nvme/nvme 00:23:09.616 18:21:40 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:23:09.616 18:21:40 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:23:09.616 18:21:40 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:23:09.616 18:21:40 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:23:09.616 18:21:40 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:23:09.616 18:21:40 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:23:09.616 
18:21:40 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:23:09.616 18:21:40 -- common/autotest_common.sh@1531 -- # grep oacs 00:23:09.616 18:21:40 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:23:09.616 18:21:40 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:23:09.616 18:21:40 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:23:09.616 18:21:40 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:23:09.616 18:21:40 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:23:09.616 18:21:40 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:23:09.616 18:21:40 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:23:09.616 18:21:40 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:23:09.616 18:21:40 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:23:09.616 18:21:40 -- common/autotest_common.sh@1543 -- # continue 00:23:09.616 18:21:40 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:23:09.616 18:21:40 -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:09.616 18:21:40 -- common/autotest_common.sh@10 -- # set +x 00:23:09.876 18:21:40 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:23:09.876 18:21:40 -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:09.876 18:21:40 -- common/autotest_common.sh@10 -- # set +x 00:23:09.876 18:21:40 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:23:10.813 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:23:10.813 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:23:10.813 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:23:10.813 18:21:41 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:23:10.813 18:21:41 -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:10.813 18:21:41 -- common/autotest_common.sh@10 -- # set +x 00:23:10.813 18:21:41 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:23:10.813 18:21:41 -- 
common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:23:10.813 18:21:41 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:23:10.813 18:21:41 -- common/autotest_common.sh@1563 -- # bdfs=() 00:23:10.813 18:21:41 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:23:10.813 18:21:41 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:23:10.813 18:21:41 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:23:10.813 18:21:41 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:23:10.813 18:21:41 -- common/autotest_common.sh@1498 -- # bdfs=() 00:23:10.813 18:21:41 -- common/autotest_common.sh@1498 -- # local bdfs 00:23:10.813 18:21:41 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:23:10.813 18:21:41 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:23:10.813 18:21:41 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:23:11.072 18:21:41 -- common/autotest_common.sh@1500 -- # (( 2 == 0 )) 00:23:11.072 18:21:41 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:23:11.072 18:21:41 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:23:11.072 18:21:41 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:23:11.072 18:21:41 -- common/autotest_common.sh@1566 -- # device=0x0010 00:23:11.072 18:21:41 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:23:11.072 18:21:41 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:23:11.072 18:21:41 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:23:11.072 18:21:41 -- common/autotest_common.sh@1566 -- # device=0x0010 00:23:11.072 18:21:41 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:23:11.072 18:21:41 -- common/autotest_common.sh@1572 -- # (( 0 > 0 )) 00:23:11.072 18:21:41 -- 
common/autotest_common.sh@1572 -- # return 0 00:23:11.072 18:21:41 -- common/autotest_common.sh@1579 -- # [[ -z '' ]] 00:23:11.072 18:21:41 -- common/autotest_common.sh@1580 -- # return 0 00:23:11.072 18:21:41 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:23:11.072 18:21:41 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:23:11.072 18:21:41 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:23:11.072 18:21:41 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:23:11.072 18:21:41 -- spdk/autotest.sh@149 -- # timing_enter lib 00:23:11.072 18:21:41 -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:11.072 18:21:41 -- common/autotest_common.sh@10 -- # set +x 00:23:11.072 18:21:41 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:23:11.072 18:21:41 -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:23:11.072 18:21:41 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:23:11.072 18:21:41 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:11.072 18:21:41 -- common/autotest_common.sh@10 -- # set +x 00:23:11.072 ************************************ 00:23:11.072 START TEST env 00:23:11.072 ************************************ 00:23:11.072 18:21:41 env -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:23:11.072 * Looking for test storage... 
00:23:11.072 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:23:11.072 18:21:41 env -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:23:11.072 18:21:41 env -- common/autotest_common.sh@1711 -- # lcov --version 00:23:11.072 18:21:41 env -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:23:11.332 18:21:42 env -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:23:11.332 18:21:42 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:11.332 18:21:42 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:11.332 18:21:42 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:11.332 18:21:42 env -- scripts/common.sh@336 -- # IFS=.-: 00:23:11.332 18:21:42 env -- scripts/common.sh@336 -- # read -ra ver1 00:23:11.332 18:21:42 env -- scripts/common.sh@337 -- # IFS=.-: 00:23:11.332 18:21:42 env -- scripts/common.sh@337 -- # read -ra ver2 00:23:11.332 18:21:42 env -- scripts/common.sh@338 -- # local 'op=<' 00:23:11.332 18:21:42 env -- scripts/common.sh@340 -- # ver1_l=2 00:23:11.332 18:21:42 env -- scripts/common.sh@341 -- # ver2_l=1 00:23:11.332 18:21:42 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:11.332 18:21:42 env -- scripts/common.sh@344 -- # case "$op" in 00:23:11.332 18:21:42 env -- scripts/common.sh@345 -- # : 1 00:23:11.332 18:21:42 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:11.332 18:21:42 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:11.332 18:21:42 env -- scripts/common.sh@365 -- # decimal 1 00:23:11.332 18:21:42 env -- scripts/common.sh@353 -- # local d=1 00:23:11.332 18:21:42 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:11.332 18:21:42 env -- scripts/common.sh@355 -- # echo 1 00:23:11.332 18:21:42 env -- scripts/common.sh@365 -- # ver1[v]=1 00:23:11.332 18:21:42 env -- scripts/common.sh@366 -- # decimal 2 00:23:11.332 18:21:42 env -- scripts/common.sh@353 -- # local d=2 00:23:11.332 18:21:42 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:11.332 18:21:42 env -- scripts/common.sh@355 -- # echo 2 00:23:11.332 18:21:42 env -- scripts/common.sh@366 -- # ver2[v]=2 00:23:11.332 18:21:42 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:11.332 18:21:42 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:11.332 18:21:42 env -- scripts/common.sh@368 -- # return 0 00:23:11.332 18:21:42 env -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:11.332 18:21:42 env -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:23:11.332 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:11.332 --rc genhtml_branch_coverage=1 00:23:11.332 --rc genhtml_function_coverage=1 00:23:11.332 --rc genhtml_legend=1 00:23:11.332 --rc geninfo_all_blocks=1 00:23:11.332 --rc geninfo_unexecuted_blocks=1 00:23:11.332 00:23:11.332 ' 00:23:11.332 18:21:42 env -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:23:11.332 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:11.332 --rc genhtml_branch_coverage=1 00:23:11.332 --rc genhtml_function_coverage=1 00:23:11.332 --rc genhtml_legend=1 00:23:11.332 --rc geninfo_all_blocks=1 00:23:11.332 --rc geninfo_unexecuted_blocks=1 00:23:11.332 00:23:11.332 ' 00:23:11.332 18:21:42 env -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:23:11.332 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:23:11.332 --rc genhtml_branch_coverage=1 00:23:11.332 --rc genhtml_function_coverage=1 00:23:11.332 --rc genhtml_legend=1 00:23:11.332 --rc geninfo_all_blocks=1 00:23:11.332 --rc geninfo_unexecuted_blocks=1 00:23:11.332 00:23:11.332 ' 00:23:11.332 18:21:42 env -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:23:11.332 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:11.332 --rc genhtml_branch_coverage=1 00:23:11.332 --rc genhtml_function_coverage=1 00:23:11.332 --rc genhtml_legend=1 00:23:11.332 --rc geninfo_all_blocks=1 00:23:11.332 --rc geninfo_unexecuted_blocks=1 00:23:11.332 00:23:11.332 ' 00:23:11.333 18:21:42 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:23:11.333 18:21:42 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:23:11.333 18:21:42 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:11.333 18:21:42 env -- common/autotest_common.sh@10 -- # set +x 00:23:11.333 ************************************ 00:23:11.333 START TEST env_memory 00:23:11.333 ************************************ 00:23:11.333 18:21:42 env.env_memory -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:23:11.333 00:23:11.333 00:23:11.333 CUnit - A unit testing framework for C - Version 2.1-3 00:23:11.333 http://cunit.sourceforge.net/ 00:23:11.333 00:23:11.333 00:23:11.333 Suite: memory 00:23:11.333 Test: alloc and free memory map ...[2024-12-06 18:21:42.163461] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:23:11.333 passed 00:23:11.333 Test: mem map translation ...[2024-12-06 18:21:42.212961] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:23:11.333 [2024-12-06 18:21:42.213167] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 
595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:23:11.333 [2024-12-06 18:21:42.213401] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:23:11.333 [2024-12-06 18:21:42.213660] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:23:11.591 passed 00:23:11.591 Test: mem map registration ...[2024-12-06 18:21:42.288289] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:23:11.591 [2024-12-06 18:21:42.288494] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:23:11.591 passed 00:23:11.591 Test: mem map adjacent registrations ...passed 00:23:11.591 00:23:11.591 Run Summary: Type Total Ran Passed Failed Inactive 00:23:11.591 suites 1 1 n/a 0 0 00:23:11.591 tests 4 4 4 0 0 00:23:11.591 asserts 152 152 152 0 n/a 00:23:11.591 00:23:11.591 Elapsed time = 0.269 seconds 00:23:11.591 00:23:11.591 real 0m0.339s 00:23:11.591 user 0m0.280s 00:23:11.591 sys 0m0.044s 00:23:11.591 18:21:42 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:11.591 18:21:42 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:23:11.591 ************************************ 00:23:11.591 END TEST env_memory 00:23:11.591 ************************************ 00:23:11.591 18:21:42 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:23:11.591 18:21:42 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:23:11.591 18:21:42 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:11.591 18:21:42 env -- common/autotest_common.sh@10 -- # set +x 00:23:11.591 
************************************ 00:23:11.591 START TEST env_vtophys 00:23:11.591 ************************************ 00:23:11.591 18:21:42 env.env_vtophys -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:23:11.591 EAL: lib.eal log level changed from notice to debug 00:23:11.591 EAL: Detected lcore 0 as core 0 on socket 0 00:23:11.591 EAL: Detected lcore 1 as core 0 on socket 0 00:23:11.591 EAL: Detected lcore 2 as core 0 on socket 0 00:23:11.591 EAL: Detected lcore 3 as core 0 on socket 0 00:23:11.591 EAL: Detected lcore 4 as core 0 on socket 0 00:23:11.591 EAL: Detected lcore 5 as core 0 on socket 0 00:23:11.591 EAL: Detected lcore 6 as core 0 on socket 0 00:23:11.591 EAL: Detected lcore 7 as core 0 on socket 0 00:23:11.591 EAL: Detected lcore 8 as core 0 on socket 0 00:23:11.591 EAL: Detected lcore 9 as core 0 on socket 0 00:23:11.850 EAL: Maximum logical cores by configuration: 128 00:23:11.851 EAL: Detected CPU lcores: 10 00:23:11.851 EAL: Detected NUMA nodes: 1 00:23:11.851 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:23:11.851 EAL: Detected shared linkage of DPDK 00:23:11.851 EAL: No shared files mode enabled, IPC will be disabled 00:23:11.851 EAL: Selected IOVA mode 'PA' 00:23:11.851 EAL: Probing VFIO support... 00:23:11.851 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:23:11.851 EAL: VFIO modules not loaded, skipping VFIO support... 00:23:11.851 EAL: Ask a virtual area of 0x2e000 bytes 00:23:11.851 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:23:11.851 EAL: Setting up physically contiguous memory... 
00:23:11.851 EAL: Setting maximum number of open files to 524288 00:23:11.851 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:23:11.851 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:23:11.851 EAL: Ask a virtual area of 0x61000 bytes 00:23:11.851 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:23:11.851 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:23:11.851 EAL: Ask a virtual area of 0x400000000 bytes 00:23:11.851 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:23:11.851 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:23:11.851 EAL: Ask a virtual area of 0x61000 bytes 00:23:11.851 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:23:11.851 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:23:11.851 EAL: Ask a virtual area of 0x400000000 bytes 00:23:11.851 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:23:11.851 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:23:11.851 EAL: Ask a virtual area of 0x61000 bytes 00:23:11.851 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:23:11.851 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:23:11.851 EAL: Ask a virtual area of 0x400000000 bytes 00:23:11.851 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:23:11.851 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:23:11.851 EAL: Ask a virtual area of 0x61000 bytes 00:23:11.851 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:23:11.851 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:23:11.851 EAL: Ask a virtual area of 0x400000000 bytes 00:23:11.851 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:23:11.851 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:23:11.851 EAL: Hugepages will be freed exactly as allocated. 
00:23:11.851 EAL: No shared files mode enabled, IPC is disabled 00:23:11.851 EAL: No shared files mode enabled, IPC is disabled 00:23:11.851 EAL: TSC frequency is ~2490000 KHz 00:23:11.851 EAL: Main lcore 0 is ready (tid=7f910b839a40;cpuset=[0]) 00:23:11.851 EAL: Trying to obtain current memory policy. 00:23:11.851 EAL: Setting policy MPOL_PREFERRED for socket 0 00:23:11.851 EAL: Restoring previous memory policy: 0 00:23:11.851 EAL: request: mp_malloc_sync 00:23:11.851 EAL: No shared files mode enabled, IPC is disabled 00:23:11.851 EAL: Heap on socket 0 was expanded by 2MB 00:23:11.851 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:23:11.851 EAL: No PCI address specified using 'addr=' in: bus=pci 00:23:11.851 EAL: Mem event callback 'spdk:(nil)' registered 00:23:11.851 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:23:11.851 00:23:11.851 00:23:11.851 CUnit - A unit testing framework for C - Version 2.1-3 00:23:11.851 http://cunit.sourceforge.net/ 00:23:11.851 00:23:11.851 00:23:11.851 Suite: components_suite 00:23:12.420 Test: vtophys_malloc_test ...passed 00:23:12.420 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:23:12.420 EAL: Setting policy MPOL_PREFERRED for socket 0 00:23:12.420 EAL: Restoring previous memory policy: 4 00:23:12.420 EAL: Calling mem event callback 'spdk:(nil)' 00:23:12.420 EAL: request: mp_malloc_sync 00:23:12.420 EAL: No shared files mode enabled, IPC is disabled 00:23:12.420 EAL: Heap on socket 0 was expanded by 4MB 00:23:12.420 EAL: Calling mem event callback 'spdk:(nil)' 00:23:12.420 EAL: request: mp_malloc_sync 00:23:12.420 EAL: No shared files mode enabled, IPC is disabled 00:23:12.420 EAL: Heap on socket 0 was shrunk by 4MB 00:23:12.420 EAL: Trying to obtain current memory policy. 
00:23:12.420 EAL: Setting policy MPOL_PREFERRED for socket 0 00:23:12.420 EAL: Restoring previous memory policy: 4 00:23:12.420 EAL: Calling mem event callback 'spdk:(nil)' 00:23:12.420 EAL: request: mp_malloc_sync 00:23:12.420 EAL: No shared files mode enabled, IPC is disabled 00:23:12.420 EAL: Heap on socket 0 was expanded by 6MB 00:23:12.420 EAL: Calling mem event callback 'spdk:(nil)' 00:23:12.420 EAL: request: mp_malloc_sync 00:23:12.420 EAL: No shared files mode enabled, IPC is disabled 00:23:12.420 EAL: Heap on socket 0 was shrunk by 6MB 00:23:12.420 EAL: Trying to obtain current memory policy. 00:23:12.420 EAL: Setting policy MPOL_PREFERRED for socket 0 00:23:12.420 EAL: Restoring previous memory policy: 4 00:23:12.420 EAL: Calling mem event callback 'spdk:(nil)' 00:23:12.420 EAL: request: mp_malloc_sync 00:23:12.420 EAL: No shared files mode enabled, IPC is disabled 00:23:12.420 EAL: Heap on socket 0 was expanded by 10MB 00:23:12.420 EAL: Calling mem event callback 'spdk:(nil)' 00:23:12.420 EAL: request: mp_malloc_sync 00:23:12.420 EAL: No shared files mode enabled, IPC is disabled 00:23:12.420 EAL: Heap on socket 0 was shrunk by 10MB 00:23:12.420 EAL: Trying to obtain current memory policy. 00:23:12.420 EAL: Setting policy MPOL_PREFERRED for socket 0 00:23:12.420 EAL: Restoring previous memory policy: 4 00:23:12.420 EAL: Calling mem event callback 'spdk:(nil)' 00:23:12.420 EAL: request: mp_malloc_sync 00:23:12.420 EAL: No shared files mode enabled, IPC is disabled 00:23:12.420 EAL: Heap on socket 0 was expanded by 18MB 00:23:12.420 EAL: Calling mem event callback 'spdk:(nil)' 00:23:12.420 EAL: request: mp_malloc_sync 00:23:12.420 EAL: No shared files mode enabled, IPC is disabled 00:23:12.420 EAL: Heap on socket 0 was shrunk by 18MB 00:23:12.420 EAL: Trying to obtain current memory policy. 
00:23:12.420 EAL: Setting policy MPOL_PREFERRED for socket 0 00:23:12.420 EAL: Restoring previous memory policy: 4 00:23:12.420 EAL: Calling mem event callback 'spdk:(nil)' 00:23:12.420 EAL: request: mp_malloc_sync 00:23:12.420 EAL: No shared files mode enabled, IPC is disabled 00:23:12.420 EAL: Heap on socket 0 was expanded by 34MB 00:23:12.420 EAL: Calling mem event callback 'spdk:(nil)' 00:23:12.420 EAL: request: mp_malloc_sync 00:23:12.420 EAL: No shared files mode enabled, IPC is disabled 00:23:12.420 EAL: Heap on socket 0 was shrunk by 34MB 00:23:12.679 EAL: Trying to obtain current memory policy. 00:23:12.679 EAL: Setting policy MPOL_PREFERRED for socket 0 00:23:12.679 EAL: Restoring previous memory policy: 4 00:23:12.679 EAL: Calling mem event callback 'spdk:(nil)' 00:23:12.679 EAL: request: mp_malloc_sync 00:23:12.679 EAL: No shared files mode enabled, IPC is disabled 00:23:12.679 EAL: Heap on socket 0 was expanded by 66MB 00:23:12.679 EAL: Calling mem event callback 'spdk:(nil)' 00:23:12.679 EAL: request: mp_malloc_sync 00:23:12.679 EAL: No shared files mode enabled, IPC is disabled 00:23:12.679 EAL: Heap on socket 0 was shrunk by 66MB 00:23:12.937 EAL: Trying to obtain current memory policy. 00:23:12.937 EAL: Setting policy MPOL_PREFERRED for socket 0 00:23:12.937 EAL: Restoring previous memory policy: 4 00:23:12.937 EAL: Calling mem event callback 'spdk:(nil)' 00:23:12.937 EAL: request: mp_malloc_sync 00:23:12.937 EAL: No shared files mode enabled, IPC is disabled 00:23:12.937 EAL: Heap on socket 0 was expanded by 130MB 00:23:13.196 EAL: Calling mem event callback 'spdk:(nil)' 00:23:13.196 EAL: request: mp_malloc_sync 00:23:13.196 EAL: No shared files mode enabled, IPC is disabled 00:23:13.196 EAL: Heap on socket 0 was shrunk by 130MB 00:23:13.455 EAL: Trying to obtain current memory policy. 
00:23:13.455 EAL: Setting policy MPOL_PREFERRED for socket 0 00:23:13.455 EAL: Restoring previous memory policy: 4 00:23:13.455 EAL: Calling mem event callback 'spdk:(nil)' 00:23:13.455 EAL: request: mp_malloc_sync 00:23:13.455 EAL: No shared files mode enabled, IPC is disabled 00:23:13.455 EAL: Heap on socket 0 was expanded by 258MB 00:23:14.022 EAL: Calling mem event callback 'spdk:(nil)' 00:23:14.022 EAL: request: mp_malloc_sync 00:23:14.022 EAL: No shared files mode enabled, IPC is disabled 00:23:14.022 EAL: Heap on socket 0 was shrunk by 258MB 00:23:14.308 EAL: Trying to obtain current memory policy. 00:23:14.308 EAL: Setting policy MPOL_PREFERRED for socket 0 00:23:14.565 EAL: Restoring previous memory policy: 4 00:23:14.565 EAL: Calling mem event callback 'spdk:(nil)' 00:23:14.565 EAL: request: mp_malloc_sync 00:23:14.565 EAL: No shared files mode enabled, IPC is disabled 00:23:14.565 EAL: Heap on socket 0 was expanded by 514MB 00:23:15.500 EAL: Calling mem event callback 'spdk:(nil)' 00:23:15.501 EAL: request: mp_malloc_sync 00:23:15.501 EAL: No shared files mode enabled, IPC is disabled 00:23:15.501 EAL: Heap on socket 0 was shrunk by 514MB 00:23:16.436 EAL: Trying to obtain current memory policy. 
00:23:16.436 EAL: Setting policy MPOL_PREFERRED for socket 0
00:23:16.694 EAL: Restoring previous memory policy: 4
00:23:16.694 EAL: Calling mem event callback 'spdk:(nil)'
00:23:16.694 EAL: request: mp_malloc_sync
00:23:16.694 EAL: No shared files mode enabled, IPC is disabled
00:23:16.694 EAL: Heap on socket 0 was expanded by 1026MB
00:23:18.599 EAL: Calling mem event callback 'spdk:(nil)'
00:23:18.599 EAL: request: mp_malloc_sync
00:23:18.599 EAL: No shared files mode enabled, IPC is disabled
00:23:18.599 EAL: Heap on socket 0 was shrunk by 1026MB
00:23:20.505 passed
00:23:20.505
00:23:20.505 Run Summary: Type Total Ran Passed Failed Inactive
00:23:20.505 suites 1 1 n/a 0 0
00:23:20.505 tests 2 2 2 0 0
00:23:20.505 asserts 5768 5768 5768 0 n/a
00:23:20.505
00:23:20.505 Elapsed time = 8.511 seconds
00:23:20.505 EAL: Calling mem event callback 'spdk:(nil)'
00:23:20.505 EAL: request: mp_malloc_sync
00:23:20.505 EAL: No shared files mode enabled, IPC is disabled
00:23:20.505 EAL: Heap on socket 0 was shrunk by 2MB
00:23:20.505 EAL: No shared files mode enabled, IPC is disabled
00:23:20.505 EAL: No shared files mode enabled, IPC is disabled
00:23:20.505 EAL: No shared files mode enabled, IPC is disabled
00:23:20.505
00:23:20.505 real 0m8.859s
00:23:20.505 user 0m7.794s
00:23:20.505 sys 0m0.900s
00:23:20.505 18:21:51 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable
00:23:20.505 18:21:51 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x
00:23:20.506 ************************************
00:23:20.506 END TEST env_vtophys
00:23:20.506 ************************************
00:23:20.506 18:21:51 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut
00:23:20.506 18:21:51 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:23:20.506 18:21:51 env -- common/autotest_common.sh@1111 -- # xtrace_disable
00:23:20.506 18:21:51 env -- common/autotest_common.sh@10 -- # set +x
00:23:20.506 ************************************
00:23:20.506 START TEST env_pci ************************************
00:23:20.506 18:21:51 env.env_pci -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut
00:23:20.506
00:23:20.506
00:23:20.506 CUnit - A unit testing framework for C - Version 2.1-3
00:23:20.506 http://cunit.sourceforge.net/
00:23:20.506
00:23:20.506
00:23:20.506 Suite: pci
00:23:20.506 Test: pci_hook ...[2024-12-06 18:21:51.448790] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 56504 has claimed it
00:23:20.764 passed
00:23:20.764
00:23:20.764 Run Summary: Type Total Ran Passed Failed Inactive
00:23:20.764 suites 1 1 n/a 0 0
00:23:20.764 tests 1 1 1 0 0
00:23:20.764 asserts 25 25 25 0 n/a
00:23:20.764
00:23:20.764 Elapsed time = 0.008 seconds
00:23:20.765 EAL: Cannot find device (10000:00:01.0)
00:23:20.765 EAL: Failed to attach device on primary process
00:23:20.765
00:23:20.765 real 0m0.107s
00:23:20.765 user 0m0.043s
00:23:20.765 sys 0m0.063s
00:23:20.765 18:21:51 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable
00:23:20.765 18:21:51 env.env_pci -- common/autotest_common.sh@10 -- # set +x
00:23:20.765 ************************************
00:23:20.765 END TEST env_pci
00:23:20.765 ************************************
00:23:20.765 18:21:51 env -- env/env.sh@14 -- # argv='-c 0x1 '
00:23:20.765 18:21:51 env -- env/env.sh@15 -- # uname
00:23:20.765 18:21:51 env -- env/env.sh@15 -- # '[' Linux = Linux ']'
00:23:20.765 18:21:51 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000
00:23:20.765 18:21:51 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000
00:23:20.765 18:21:51 env -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']'
00:23:20.765 18:21:51 env -- common/autotest_common.sh@1111 -- # xtrace_disable
00:23:20.765 18:21:51 env -- common/autotest_common.sh@10 -- # set +x
00:23:20.765 ************************************
00:23:20.765 START TEST env_dpdk_post_init
00:23:20.765 ************************************
00:23:20.765 18:21:51 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000
00:23:20.765 EAL: Detected CPU lcores: 10
00:23:20.765 EAL: Detected NUMA nodes: 1
00:23:20.765 EAL: Detected shared linkage of DPDK
00:23:20.765 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
00:23:20.765 EAL: Selected IOVA mode 'PA'
00:23:21.024 TELEMETRY: No legacy callbacks, legacy socket not created
00:23:21.024 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1)
00:23:21.024 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1)
00:23:21.024 Starting DPDK initialization...
00:23:21.024 Starting SPDK post initialization...
00:23:21.024 SPDK NVMe probe
00:23:21.024 Attaching to 0000:00:10.0
00:23:21.024 Attaching to 0000:00:11.0
00:23:21.024 Attached to 0000:00:10.0
00:23:21.024 Attached to 0000:00:11.0
00:23:21.024 Cleaning up...
00:23:21.024
00:23:21.024 real 0m0.301s
00:23:21.024 user 0m0.101s
00:23:21.024 sys 0m0.100s
00:23:21.024 18:21:51 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable
00:23:21.024 18:21:51 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x
00:23:21.024 ************************************
00:23:21.024 END TEST env_dpdk_post_init
00:23:21.024 ************************************
00:23:21.024 18:21:51 env -- env/env.sh@26 -- # uname
00:23:21.024 18:21:51 env -- env/env.sh@26 -- # '[' Linux = Linux ']'
00:23:21.024 18:21:51 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks
00:23:21.024 18:21:51 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:23:21.024 18:21:51 env -- common/autotest_common.sh@1111 -- # xtrace_disable
00:23:21.024 18:21:51 env -- common/autotest_common.sh@10 -- # set +x
00:23:21.284 ************************************
00:23:21.284 START TEST env_mem_callbacks
00:23:21.284 ************************************
00:23:21.284 18:21:51 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks
00:23:21.284 EAL: Detected CPU lcores: 10
00:23:21.284 EAL: Detected NUMA nodes: 1
00:23:21.284 EAL: Detected shared linkage of DPDK
00:23:21.284 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
00:23:21.284 EAL: Selected IOVA mode 'PA'
00:23:21.284
00:23:21.284
00:23:21.284 CUnit - A unit testing framework for C - Version 2.1-3
00:23:21.284 http://cunit.sourceforge.net/
00:23:21.284
00:23:21.284
00:23:21.284 Suite: memory
00:23:21.284 Test: test ...
00:23:21.284 register 0x200000200000 2097152
00:23:21.284 malloc 3145728
00:23:21.284 TELEMETRY: No legacy callbacks, legacy socket not created
00:23:21.284 register 0x200000400000 4194304
00:23:21.284 buf 0x2000004fffc0 len 3145728 PASSED
00:23:21.284 malloc 64
00:23:21.284 buf 0x2000004ffec0 len 64 PASSED
00:23:21.284 malloc 4194304
00:23:21.284 register 0x200000800000 6291456
00:23:21.284 buf 0x2000009fffc0 len 4194304 PASSED
00:23:21.284 free 0x2000004fffc0 3145728
00:23:21.284 free 0x2000004ffec0 64
00:23:21.284 unregister 0x200000400000 4194304 PASSED
00:23:21.284 free 0x2000009fffc0 4194304
00:23:21.284 unregister 0x200000800000 6291456 PASSED
00:23:21.284 malloc 8388608
00:23:21.284 register 0x200000400000 10485760
00:23:21.284 buf 0x2000005fffc0 len 8388608 PASSED
00:23:21.284 free 0x2000005fffc0 8388608
00:23:21.284 unregister 0x200000400000 10485760 PASSED
00:23:21.544 passed
00:23:21.544
00:23:21.544 Run Summary: Type Total Ran Passed Failed Inactive
00:23:21.544 suites 1 1 n/a 0 0
00:23:21.544 tests 1 1 1 0 0
00:23:21.544 asserts 15 15 15 0 n/a
00:23:21.544
00:23:21.544 Elapsed time = 0.079 seconds
00:23:21.544
00:23:21.544 real 0m0.286s
00:23:21.544 user 0m0.109s
00:23:21.544 sys 0m0.076s
00:23:21.544 18:21:52 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable
00:23:21.544 18:21:52 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x
00:23:21.544 ************************************
00:23:21.544 END TEST env_mem_callbacks
00:23:21.544 ************************************
00:23:21.544
00:23:21.544 real 0m10.479s
00:23:21.544 user 0m8.587s
00:23:21.544 sys 0m1.517s
00:23:21.544 18:21:52 env -- common/autotest_common.sh@1130 -- # xtrace_disable
00:23:21.544 18:21:52 env -- common/autotest_common.sh@10 -- # set +x
00:23:21.544 ************************************
00:23:21.544 END TEST env
00:23:21.544 ************************************
00:23:21.544 18:21:52 -- spdk/autotest.sh@156 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh
00:23:21.544 18:21:52 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:23:21.544 18:21:52 -- common/autotest_common.sh@1111 -- # xtrace_disable
00:23:21.544 18:21:52 -- common/autotest_common.sh@10 -- # set +x
00:23:21.544 ************************************
00:23:21.544 START TEST rpc
00:23:21.544 ************************************
00:23:21.544 18:21:52 rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh
00:23:21.804 * Looking for test storage...
00:23:21.804 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc
00:23:21.804 18:21:52 rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:23:21.804 18:21:52 rpc -- common/autotest_common.sh@1711 -- # lcov --version
00:23:21.804 18:21:52 rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:23:21.804 18:21:52 rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:23:21.804 18:21:52 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:23:21.804 18:21:52 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l
00:23:21.804 18:21:52 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l
00:23:21.804 18:21:52 rpc -- scripts/common.sh@336 -- # IFS=.-:
00:23:21.804 18:21:52 rpc -- scripts/common.sh@336 -- # read -ra ver1
00:23:21.804 18:21:52 rpc -- scripts/common.sh@337 -- # IFS=.-:
00:23:21.804 18:21:52 rpc -- scripts/common.sh@337 -- # read -ra ver2
00:23:21.804 18:21:52 rpc -- scripts/common.sh@338 -- # local 'op=<'
00:23:21.804 18:21:52 rpc -- scripts/common.sh@340 -- # ver1_l=2
00:23:21.804 18:21:52 rpc -- scripts/common.sh@341 -- # ver2_l=1
00:23:21.804 18:21:52 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:23:21.804 18:21:52 rpc -- scripts/common.sh@344 -- # case "$op" in
00:23:21.804 18:21:52 rpc -- scripts/common.sh@345 -- # : 1
00:23:21.804 18:21:52 rpc -- scripts/common.sh@364 -- # (( v = 0 ))
00:23:21.804 18:21:52 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:23:21.804 18:21:52 rpc -- scripts/common.sh@365 -- # decimal 1
00:23:21.804 18:21:52 rpc -- scripts/common.sh@353 -- # local d=1
00:23:21.804 18:21:52 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:23:21.804 18:21:52 rpc -- scripts/common.sh@355 -- # echo 1
00:23:21.804 18:21:52 rpc -- scripts/common.sh@365 -- # ver1[v]=1
00:23:21.804 18:21:52 rpc -- scripts/common.sh@366 -- # decimal 2
00:23:21.804 18:21:52 rpc -- scripts/common.sh@353 -- # local d=2
00:23:21.804 18:21:52 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:23:21.804 18:21:52 rpc -- scripts/common.sh@355 -- # echo 2
00:23:21.804 18:21:52 rpc -- scripts/common.sh@366 -- # ver2[v]=2
00:23:21.804 18:21:52 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:23:21.804 18:21:52 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:23:21.804 18:21:52 rpc -- scripts/common.sh@368 -- # return 0
00:23:21.804 18:21:52 rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:23:21.804 18:21:52 rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:23:21.804 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:23:21.804 --rc genhtml_branch_coverage=1
00:23:21.804 --rc genhtml_function_coverage=1
00:23:21.804 --rc genhtml_legend=1
00:23:21.804 --rc geninfo_all_blocks=1
00:23:21.804 --rc geninfo_unexecuted_blocks=1
00:23:21.804
00:23:21.804 '
00:23:21.804 18:21:52 rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:23:21.804 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:23:21.804 --rc genhtml_branch_coverage=1
00:23:21.804 --rc genhtml_function_coverage=1
00:23:21.804 --rc genhtml_legend=1
00:23:21.804 --rc geninfo_all_blocks=1
00:23:21.804 --rc geninfo_unexecuted_blocks=1
00:23:21.804
00:23:21.804 '
00:23:21.804 18:21:52 rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov
00:23:21.804 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:23:21.804 --rc genhtml_branch_coverage=1
00:23:21.804 --rc genhtml_function_coverage=1
00:23:21.804 --rc genhtml_legend=1
00:23:21.804 --rc geninfo_all_blocks=1
00:23:21.804 --rc geninfo_unexecuted_blocks=1
00:23:21.804
00:23:21.804 '
00:23:21.804 18:21:52 rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov
00:23:21.804 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:23:21.804 --rc genhtml_branch_coverage=1
00:23:21.804 --rc genhtml_function_coverage=1
00:23:21.804 --rc genhtml_legend=1
00:23:21.804 --rc geninfo_all_blocks=1
00:23:21.804 --rc geninfo_unexecuted_blocks=1
00:23:21.804
00:23:21.804 '
00:23:21.804 18:21:52 rpc -- rpc/rpc.sh@65 -- # spdk_pid=56631
00:23:21.804 18:21:52 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev
00:23:21.804 18:21:52 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT
00:23:21.804 18:21:52 rpc -- rpc/rpc.sh@67 -- # waitforlisten 56631
00:23:21.804 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:23:21.804 18:21:52 rpc -- common/autotest_common.sh@835 -- # '[' -z 56631 ']'
00:23:21.804 18:21:52 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:23:21.804 18:21:52 rpc -- common/autotest_common.sh@840 -- # local max_retries=100
00:23:21.804 18:21:52 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:23:21.804 18:21:52 rpc -- common/autotest_common.sh@844 -- # xtrace_disable
00:23:21.804 18:21:52 rpc -- common/autotest_common.sh@10 -- # set +x
00:23:21.804 [2024-12-06 18:21:52.741597] Starting SPDK v25.01-pre git sha1 b6a18b192 / DPDK 24.03.0 initialization...
00:23:21.804 [2024-12-06 18:21:52.741872] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56631 ]
00:23:22.063 [2024-12-06 18:21:52.928320] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:23:22.322 [2024-12-06 18:21:53.043143] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified.
00:23:22.322 [2024-12-06 18:21:53.043401] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 56631' to capture a snapshot of events at runtime.
00:23:22.322 [2024-12-06 18:21:53.043505] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:23:22.322 [2024-12-06 18:21:53.043526] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:23:22.322 [2024-12-06 18:21:53.043537] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid56631 for offline analysis/debug.
00:23:22.322 [2024-12-06 18:21:53.044948] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:23:23.261 18:21:53 rpc -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:23:23.261 18:21:53 rpc -- common/autotest_common.sh@868 -- # return 0
00:23:23.261 18:21:53 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc
00:23:23.261 18:21:53 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc
00:23:23.261 18:21:53 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd
00:23:23.261 18:21:53 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity
00:23:23.261 18:21:53 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:23:23.261 18:21:53 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable
00:23:23.261 18:21:53 rpc -- common/autotest_common.sh@10 -- # set +x
00:23:23.261 ************************************
00:23:23.261 START TEST rpc_integrity
00:23:23.261 ************************************
00:23:23.261 18:21:53 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity
00:23:23.261 18:21:53 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs
00:23:23.261 18:21:53 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:23.261 18:21:53 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:23:23.261 18:21:53 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:23.261 18:21:53 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]'
00:23:23.261 18:21:53 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length
00:23:23.261 18:21:54 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']'
00:23:23.261 18:21:54 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512
00:23:23.261 18:21:54 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:23.261 18:21:54 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:23:23.261 18:21:54 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:23.261 18:21:54 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0
00:23:23.261 18:21:54 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs
00:23:23.261 18:21:54 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:23.261 18:21:54 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:23:23.261 18:21:54 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:23.261 18:21:54 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[
00:23:23.261 {
00:23:23.261 "name": "Malloc0",
00:23:23.261 "aliases": [
00:23:23.261 "306634d8-3f28-4acc-a208-91fd7e662281"
00:23:23.261 ],
00:23:23.261 "product_name": "Malloc disk",
00:23:23.261 "block_size": 512,
00:23:23.261 "num_blocks": 16384,
00:23:23.261 "uuid": "306634d8-3f28-4acc-a208-91fd7e662281",
00:23:23.261 "assigned_rate_limits": {
00:23:23.261 "rw_ios_per_sec": 0,
00:23:23.261 "rw_mbytes_per_sec": 0,
00:23:23.261 "r_mbytes_per_sec": 0,
00:23:23.261 "w_mbytes_per_sec": 0
00:23:23.261 },
00:23:23.261 "claimed": false,
00:23:23.261 "zoned": false,
00:23:23.261 "supported_io_types": {
00:23:23.261 "read": true,
00:23:23.261 "write": true,
00:23:23.261 "unmap": true,
00:23:23.261 "flush": true,
00:23:23.261 "reset": true,
00:23:23.261 "nvme_admin": false,
00:23:23.261 "nvme_io": false,
00:23:23.261 "nvme_io_md": false,
00:23:23.261 "write_zeroes": true,
00:23:23.261 "zcopy": true,
00:23:23.261 "get_zone_info": false,
00:23:23.261 "zone_management": false,
00:23:23.261 "zone_append": false,
00:23:23.261 "compare": false,
00:23:23.261 "compare_and_write": false,
00:23:23.261 "abort": true,
00:23:23.261 "seek_hole": false,
00:23:23.261 "seek_data": false,
00:23:23.261 "copy": true,
00:23:23.261 "nvme_iov_md": false
00:23:23.261 },
00:23:23.261 "memory_domains": [
00:23:23.261 {
00:23:23.261 "dma_device_id": "system",
00:23:23.261 "dma_device_type": 1
00:23:23.261 },
00:23:23.261 {
00:23:23.261 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:23:23.261 "dma_device_type": 2
00:23:23.261 }
00:23:23.261 ],
00:23:23.261 "driver_specific": {}
00:23:23.261 }
00:23:23.261 ]'
00:23:23.261 18:21:54 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length
00:23:23.261 18:21:54 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']'
00:23:23.261 18:21:54 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0
00:23:23.261 18:21:54 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:23.261 18:21:54 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:23:23.261 [2024-12-06 18:21:54.099123] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc0
00:23:23.261 [2024-12-06 18:21:54.099352] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:23:23.261 [2024-12-06 18:21:54.099407] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880
00:23:23.261 [2024-12-06 18:21:54.099432] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed
00:23:23.261 [2024-12-06 18:21:54.102135] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:23:23.261 [2024-12-06 18:21:54.102192] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0
00:23:23.261 Passthru0
00:23:23.261 18:21:54 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:23.261 18:21:54 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs
00:23:23.261 18:21:54 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:23.261 18:21:54 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:23:23.261 18:21:54 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:23.261 18:21:54 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[
00:23:23.261 {
00:23:23.261 "name": "Malloc0",
00:23:23.261 "aliases": [
00:23:23.261 "306634d8-3f28-4acc-a208-91fd7e662281"
00:23:23.261 ],
00:23:23.261 "product_name": "Malloc disk",
00:23:23.261 "block_size": 512,
00:23:23.261 "num_blocks": 16384,
00:23:23.261 "uuid": "306634d8-3f28-4acc-a208-91fd7e662281",
00:23:23.261 "assigned_rate_limits": {
00:23:23.261 "rw_ios_per_sec": 0,
00:23:23.261 "rw_mbytes_per_sec": 0,
00:23:23.261 "r_mbytes_per_sec": 0,
00:23:23.261 "w_mbytes_per_sec": 0
00:23:23.261 },
00:23:23.261 "claimed": true,
00:23:23.261 "claim_type": "exclusive_write",
00:23:23.261 "zoned": false,
00:23:23.261 "supported_io_types": {
00:23:23.261 "read": true,
00:23:23.261 "write": true,
00:23:23.261 "unmap": true,
00:23:23.261 "flush": true,
00:23:23.261 "reset": true,
00:23:23.261 "nvme_admin": false,
00:23:23.262 "nvme_io": false,
00:23:23.262 "nvme_io_md": false,
00:23:23.262 "write_zeroes": true,
00:23:23.262 "zcopy": true,
00:23:23.262 "get_zone_info": false,
00:23:23.262 "zone_management": false,
00:23:23.262 "zone_append": false,
00:23:23.262 "compare": false,
00:23:23.262 "compare_and_write": false,
00:23:23.262 "abort": true,
00:23:23.262 "seek_hole": false,
00:23:23.262 "seek_data": false,
00:23:23.262 "copy": true,
00:23:23.262 "nvme_iov_md": false
00:23:23.262 },
00:23:23.262 "memory_domains": [
00:23:23.262 {
00:23:23.262 "dma_device_id": "system",
00:23:23.262 "dma_device_type": 1
00:23:23.262 },
00:23:23.262 {
00:23:23.262 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:23:23.262 "dma_device_type": 2
00:23:23.262 }
00:23:23.262 ],
00:23:23.262 "driver_specific": {}
00:23:23.262 },
00:23:23.262 {
00:23:23.262 "name": "Passthru0",
00:23:23.262 "aliases": [
00:23:23.262 "a01a5fdb-de56-5833-8e54-57b67003ae50"
00:23:23.262 ],
00:23:23.262 "product_name": "passthru",
00:23:23.262 "block_size": 512,
00:23:23.262 "num_blocks": 16384,
00:23:23.262 "uuid": "a01a5fdb-de56-5833-8e54-57b67003ae50",
00:23:23.262 "assigned_rate_limits": {
00:23:23.262 "rw_ios_per_sec": 0,
00:23:23.262 "rw_mbytes_per_sec": 0,
00:23:23.262 "r_mbytes_per_sec": 0,
00:23:23.262 "w_mbytes_per_sec": 0
00:23:23.262 },
00:23:23.262 "claimed": false,
00:23:23.262 "zoned": false,
00:23:23.262 "supported_io_types": {
00:23:23.262 "read": true,
00:23:23.262 "write": true,
00:23:23.262 "unmap": true,
00:23:23.262 "flush": true,
00:23:23.262 "reset": true,
00:23:23.262 "nvme_admin": false,
00:23:23.262 "nvme_io": false,
00:23:23.262 "nvme_io_md": false,
00:23:23.262 "write_zeroes": true,
00:23:23.262 "zcopy": true,
00:23:23.262 "get_zone_info": false,
00:23:23.262 "zone_management": false,
00:23:23.262 "zone_append": false,
00:23:23.262 "compare": false,
00:23:23.262 "compare_and_write": false,
00:23:23.262 "abort": true,
00:23:23.262 "seek_hole": false,
00:23:23.262 "seek_data": false,
00:23:23.262 "copy": true,
00:23:23.262 "nvme_iov_md": false
00:23:23.262 },
00:23:23.262 "memory_domains": [
00:23:23.262 {
00:23:23.262 "dma_device_id": "system",
00:23:23.262 "dma_device_type": 1
00:23:23.262 },
00:23:23.262 {
00:23:23.262 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:23:23.262 "dma_device_type": 2
00:23:23.262 }
00:23:23.262 ],
00:23:23.262 "driver_specific": {
00:23:23.262 "passthru": {
00:23:23.262 "name": "Passthru0",
00:23:23.262 "base_bdev_name": "Malloc0"
00:23:23.262 }
00:23:23.262 }
00:23:23.262 }
00:23:23.262 ]'
00:23:23.262 18:21:54 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length
00:23:23.262 18:21:54 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']'
00:23:23.262 18:21:54 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0
00:23:23.262 18:21:54 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:23.262 18:21:54 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:23:23.262 18:21:54 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:23.262 18:21:54 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0
00:23:23.262 18:21:54 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:23.262 18:21:54 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:23:23.521 18:21:54 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:23.521 18:21:54 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs
00:23:23.521 18:21:54 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:23.521 18:21:54 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:23:23.521 18:21:54 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:23.521 18:21:54 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]'
00:23:23.521 18:21:54 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length
00:23:23.521 ************************************
00:23:23.521 END TEST rpc_integrity
00:23:23.521 ************************************
00:23:23.521 18:21:54 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']'
00:23:23.521
00:23:23.521 real 0m0.334s
00:23:23.521 user 0m0.179s
00:23:23.521 sys 0m0.048s
00:23:23.521 18:21:54 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable
00:23:23.521 18:21:54 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:23:23.521 18:21:54 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins
00:23:23.521 18:21:54 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:23:23.521 18:21:54 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable
00:23:23.521 18:21:54 rpc -- common/autotest_common.sh@10 -- # set +x
00:23:23.521 ************************************
00:23:23.521 START TEST rpc_plugins
00:23:23.521 ************************************
00:23:23.521 18:21:54 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins
00:23:23.521 18:21:54 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc
00:23:23.521 18:21:54 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:23.521 18:21:54 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x
00:23:23.521 18:21:54 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:23.521 18:21:54 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1
00:23:23.521 18:21:54 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs
00:23:23.521 18:21:54 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:23.521 18:21:54 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x
00:23:23.521 18:21:54 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:23.521 18:21:54 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[
00:23:23.521 {
00:23:23.521 "name": "Malloc1",
00:23:23.521 "aliases": [
00:23:23.521 "f7a8a2de-9c8f-4d8b-bbff-e0ee25d55d03"
00:23:23.521 ],
00:23:23.521 "product_name": "Malloc disk",
00:23:23.521 "block_size": 4096,
00:23:23.521 "num_blocks": 256,
00:23:23.521 "uuid": "f7a8a2de-9c8f-4d8b-bbff-e0ee25d55d03",
00:23:23.521 "assigned_rate_limits": {
00:23:23.521 "rw_ios_per_sec": 0,
00:23:23.521 "rw_mbytes_per_sec": 0,
00:23:23.521 "r_mbytes_per_sec": 0,
00:23:23.521 "w_mbytes_per_sec": 0
00:23:23.521 },
00:23:23.521 "claimed": false,
00:23:23.521 "zoned": false,
00:23:23.521 "supported_io_types": {
00:23:23.521 "read": true,
00:23:23.521 "write": true,
00:23:23.521 "unmap": true,
00:23:23.521 "flush": true,
00:23:23.521 "reset": true,
00:23:23.521 "nvme_admin": false,
00:23:23.521 "nvme_io": false,
00:23:23.521 "nvme_io_md": false,
00:23:23.521 "write_zeroes": true,
00:23:23.521 "zcopy": true,
00:23:23.521 "get_zone_info": false,
00:23:23.521 "zone_management": false,
00:23:23.521 "zone_append": false,
00:23:23.521 "compare": false,
00:23:23.521 "compare_and_write": false,
00:23:23.521 "abort": true,
00:23:23.521 "seek_hole": false,
00:23:23.521 "seek_data": false,
00:23:23.521 "copy": true,
00:23:23.521 "nvme_iov_md": false
00:23:23.521 },
00:23:23.521 "memory_domains": [
00:23:23.521 {
00:23:23.521 "dma_device_id": "system",
00:23:23.521 "dma_device_type": 1
00:23:23.521 },
00:23:23.521 {
00:23:23.521 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:23:23.521 "dma_device_type": 2
00:23:23.521 }
00:23:23.521 ],
00:23:23.521 "driver_specific": {}
00:23:23.521 }
00:23:23.521 ]'
00:23:23.521 18:21:54 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length
00:23:23.521 18:21:54 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']'
00:23:23.521 18:21:54 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1
00:23:23.521 18:21:54 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:23.521 18:21:54 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x
00:23:23.780 18:21:54 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:23.780 18:21:54 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs
00:23:23.780 18:21:54 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:23.780 18:21:54 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x
00:23:23.780 18:21:54 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:23.780 18:21:54 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]'
00:23:23.780 18:21:54 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length
00:23:23.780 ************************************
00:23:23.780 END TEST rpc_plugins
00:23:23.780 ************************************
00:23:23.780 18:21:54 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']'
00:23:23.780
00:23:23.780 real 0m0.171s
00:23:23.780 user 0m0.088s
00:23:23.780 sys 0m0.032s
00:23:23.781 18:21:54 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable
00:23:23.781 18:21:54 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x
00:23:23.781 18:21:54 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test
00:23:23.781 18:21:54 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:23:23.781 18:21:54 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable
00:23:23.781 18:21:54 rpc -- common/autotest_common.sh@10 -- # set +x
00:23:23.781 ************************************
00:23:23.781 START TEST rpc_trace_cmd_test
00:23:23.781 ************************************
00:23:23.781 18:21:54 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 -- # rpc_trace_cmd_test
00:23:23.781 18:21:54 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info
00:23:23.781 18:21:54 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info
00:23:23.781 18:21:54 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:23.781 18:21:54 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x
00:23:23.781 18:21:54 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:23.781 18:21:54 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{
00:23:23.781 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid56631",
00:23:23.781 "tpoint_group_mask": "0x8",
00:23:23.781 "iscsi_conn": {
00:23:23.781 "mask": "0x2",
00:23:23.781 "tpoint_mask": "0x0"
00:23:23.781 },
00:23:23.781 "scsi": {
00:23:23.781 "mask": "0x4",
00:23:23.781 "tpoint_mask": "0x0"
00:23:23.781 },
00:23:23.781 "bdev": {
00:23:23.781 "mask": "0x8",
00:23:23.781 "tpoint_mask": "0xffffffffffffffff"
00:23:23.781 },
00:23:23.781 "nvmf_rdma": {
00:23:23.781 "mask": "0x10",
00:23:23.781 "tpoint_mask": "0x0"
00:23:23.781 },
00:23:23.781 "nvmf_tcp": {
00:23:23.781 "mask": "0x20",
00:23:23.781 "tpoint_mask": "0x0"
00:23:23.781 },
00:23:23.781 "ftl": {
00:23:23.781 "mask": "0x40",
00:23:23.781 "tpoint_mask": "0x0"
00:23:23.781 },
00:23:23.781 "blobfs": {
00:23:23.781 "mask": "0x80",
00:23:23.781 "tpoint_mask": "0x0"
00:23:23.781 },
00:23:23.781 "dsa": {
00:23:23.781 "mask": "0x200",
00:23:23.781 "tpoint_mask": "0x0"
00:23:23.781 },
00:23:23.781 "thread": {
00:23:23.781 "mask": "0x400",
00:23:23.781 "tpoint_mask": "0x0"
00:23:23.781 },
00:23:23.781 "nvme_pcie": {
00:23:23.781 "mask": "0x800",
00:23:23.781 "tpoint_mask": "0x0"
00:23:23.781 },
00:23:23.781 "iaa": {
00:23:23.781 "mask": "0x1000",
00:23:23.781 "tpoint_mask": "0x0"
00:23:23.781 },
00:23:23.781 "nvme_tcp": {
00:23:23.781 "mask": "0x2000",
00:23:23.781 "tpoint_mask": "0x0"
00:23:23.781 },
00:23:23.781 "bdev_nvme": {
00:23:23.781 "mask": "0x4000",
00:23:23.781 "tpoint_mask": "0x0"
00:23:23.781 },
00:23:23.781 "sock": {
00:23:23.781 "mask": "0x8000",
00:23:23.781 "tpoint_mask": "0x0"
00:23:23.781 },
00:23:23.781 "blob": {
00:23:23.781 "mask": "0x10000",
00:23:23.781 "tpoint_mask": "0x0"
00:23:23.781 },
00:23:23.781 "bdev_raid": {
00:23:23.781 "mask": "0x20000",
00:23:23.781 "tpoint_mask": "0x0"
00:23:23.781 },
00:23:23.781 "scheduler": {
00:23:23.781 "mask": "0x40000",
00:23:23.781 "tpoint_mask": "0x0"
00:23:23.781 }
00:23:23.781 }'
00:23:23.781 18:21:54 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length
00:23:23.781 18:21:54 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']'
00:23:23.781 18:21:54 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")'
00:23:23.781 18:21:54 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']'
00:23:23.781 18:21:54 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")'
00:23:24.040 18:21:54 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']'
00:23:24.040 18:21:54 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")'
00:23:24.040 18:21:54 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']'
00:23:24.040 18:21:54 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask
00:23:24.040 ************************************
00:23:24.040 END TEST rpc_trace_cmd_test
00:23:24.040 ************************************
00:23:24.040 18:21:54 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']'
00:23:24.040
00:23:24.040 real 0m0.260s
00:23:24.040 user
0m0.205s 00:23:24.040 sys 0m0.044s 00:23:24.040 18:21:54 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:24.040 18:21:54 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:23:24.040 18:21:54 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:23:24.040 18:21:54 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:23:24.040 18:21:54 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:23:24.040 18:21:54 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:23:24.040 18:21:54 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:24.040 18:21:54 rpc -- common/autotest_common.sh@10 -- # set +x 00:23:24.040 ************************************ 00:23:24.040 START TEST rpc_daemon_integrity 00:23:24.040 ************************************ 00:23:24.040 18:21:54 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:23:24.040 18:21:54 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:23:24.040 18:21:54 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:24.040 18:21:54 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:23:24.040 18:21:54 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:24.040 18:21:54 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:23:24.040 18:21:54 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:23:24.040 18:21:54 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:23:24.040 18:21:54 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:23:24.040 18:21:54 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:24.040 18:21:54 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:23:24.302 18:21:54 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:24.302 18:21:54 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # 
malloc=Malloc2 00:23:24.302 18:21:54 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:23:24.302 18:21:54 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:24.302 18:21:54 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:23:24.302 18:21:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:24.302 18:21:55 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:23:24.302 { 00:23:24.302 "name": "Malloc2", 00:23:24.302 "aliases": [ 00:23:24.302 "8a888c84-db13-4eae-8a9c-3cb50cf28533" 00:23:24.302 ], 00:23:24.302 "product_name": "Malloc disk", 00:23:24.302 "block_size": 512, 00:23:24.302 "num_blocks": 16384, 00:23:24.302 "uuid": "8a888c84-db13-4eae-8a9c-3cb50cf28533", 00:23:24.302 "assigned_rate_limits": { 00:23:24.302 "rw_ios_per_sec": 0, 00:23:24.302 "rw_mbytes_per_sec": 0, 00:23:24.302 "r_mbytes_per_sec": 0, 00:23:24.302 "w_mbytes_per_sec": 0 00:23:24.302 }, 00:23:24.302 "claimed": false, 00:23:24.302 "zoned": false, 00:23:24.302 "supported_io_types": { 00:23:24.302 "read": true, 00:23:24.302 "write": true, 00:23:24.302 "unmap": true, 00:23:24.302 "flush": true, 00:23:24.302 "reset": true, 00:23:24.302 "nvme_admin": false, 00:23:24.302 "nvme_io": false, 00:23:24.302 "nvme_io_md": false, 00:23:24.302 "write_zeroes": true, 00:23:24.302 "zcopy": true, 00:23:24.302 "get_zone_info": false, 00:23:24.302 "zone_management": false, 00:23:24.302 "zone_append": false, 00:23:24.302 "compare": false, 00:23:24.302 "compare_and_write": false, 00:23:24.302 "abort": true, 00:23:24.302 "seek_hole": false, 00:23:24.302 "seek_data": false, 00:23:24.302 "copy": true, 00:23:24.302 "nvme_iov_md": false 00:23:24.302 }, 00:23:24.302 "memory_domains": [ 00:23:24.302 { 00:23:24.302 "dma_device_id": "system", 00:23:24.302 "dma_device_type": 1 00:23:24.302 }, 00:23:24.302 { 00:23:24.302 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:24.302 "dma_device_type": 2 00:23:24.302 } 
00:23:24.302 ], 00:23:24.302 "driver_specific": {} 00:23:24.302 } 00:23:24.302 ]' 00:23:24.302 18:21:55 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:23:24.302 18:21:55 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:23:24.302 18:21:55 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:23:24.302 18:21:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:24.302 18:21:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:23:24.302 [2024-12-06 18:21:55.078261] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:23:24.302 [2024-12-06 18:21:55.078322] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:23:24.302 [2024-12-06 18:21:55.078344] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:23:24.302 [2024-12-06 18:21:55.078359] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:23:24.302 [2024-12-06 18:21:55.080972] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:23:24.302 [2024-12-06 18:21:55.081022] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:23:24.302 Passthru0 00:23:24.302 18:21:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:24.302 18:21:55 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:23:24.302 18:21:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:24.302 18:21:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:23:24.302 18:21:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:24.302 18:21:55 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:23:24.302 { 00:23:24.302 "name": "Malloc2", 00:23:24.302 "aliases": [ 00:23:24.302 "8a888c84-db13-4eae-8a9c-3cb50cf28533" 
00:23:24.302 ], 00:23:24.302 "product_name": "Malloc disk", 00:23:24.302 "block_size": 512, 00:23:24.302 "num_blocks": 16384, 00:23:24.302 "uuid": "8a888c84-db13-4eae-8a9c-3cb50cf28533", 00:23:24.302 "assigned_rate_limits": { 00:23:24.302 "rw_ios_per_sec": 0, 00:23:24.302 "rw_mbytes_per_sec": 0, 00:23:24.302 "r_mbytes_per_sec": 0, 00:23:24.302 "w_mbytes_per_sec": 0 00:23:24.302 }, 00:23:24.302 "claimed": true, 00:23:24.302 "claim_type": "exclusive_write", 00:23:24.302 "zoned": false, 00:23:24.302 "supported_io_types": { 00:23:24.302 "read": true, 00:23:24.302 "write": true, 00:23:24.302 "unmap": true, 00:23:24.302 "flush": true, 00:23:24.302 "reset": true, 00:23:24.302 "nvme_admin": false, 00:23:24.302 "nvme_io": false, 00:23:24.302 "nvme_io_md": false, 00:23:24.302 "write_zeroes": true, 00:23:24.302 "zcopy": true, 00:23:24.302 "get_zone_info": false, 00:23:24.302 "zone_management": false, 00:23:24.302 "zone_append": false, 00:23:24.302 "compare": false, 00:23:24.302 "compare_and_write": false, 00:23:24.302 "abort": true, 00:23:24.302 "seek_hole": false, 00:23:24.302 "seek_data": false, 00:23:24.302 "copy": true, 00:23:24.302 "nvme_iov_md": false 00:23:24.302 }, 00:23:24.302 "memory_domains": [ 00:23:24.302 { 00:23:24.302 "dma_device_id": "system", 00:23:24.302 "dma_device_type": 1 00:23:24.302 }, 00:23:24.302 { 00:23:24.302 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:24.302 "dma_device_type": 2 00:23:24.302 } 00:23:24.302 ], 00:23:24.302 "driver_specific": {} 00:23:24.302 }, 00:23:24.302 { 00:23:24.302 "name": "Passthru0", 00:23:24.302 "aliases": [ 00:23:24.302 "cebba5b9-9a43-5a52-bf46-c2ab164c639d" 00:23:24.302 ], 00:23:24.302 "product_name": "passthru", 00:23:24.302 "block_size": 512, 00:23:24.302 "num_blocks": 16384, 00:23:24.302 "uuid": "cebba5b9-9a43-5a52-bf46-c2ab164c639d", 00:23:24.302 "assigned_rate_limits": { 00:23:24.302 "rw_ios_per_sec": 0, 00:23:24.302 "rw_mbytes_per_sec": 0, 00:23:24.302 "r_mbytes_per_sec": 0, 00:23:24.302 "w_mbytes_per_sec": 0 
00:23:24.302 }, 00:23:24.302 "claimed": false, 00:23:24.302 "zoned": false, 00:23:24.302 "supported_io_types": { 00:23:24.302 "read": true, 00:23:24.302 "write": true, 00:23:24.302 "unmap": true, 00:23:24.302 "flush": true, 00:23:24.302 "reset": true, 00:23:24.302 "nvme_admin": false, 00:23:24.302 "nvme_io": false, 00:23:24.302 "nvme_io_md": false, 00:23:24.302 "write_zeroes": true, 00:23:24.302 "zcopy": true, 00:23:24.302 "get_zone_info": false, 00:23:24.302 "zone_management": false, 00:23:24.302 "zone_append": false, 00:23:24.302 "compare": false, 00:23:24.302 "compare_and_write": false, 00:23:24.302 "abort": true, 00:23:24.302 "seek_hole": false, 00:23:24.302 "seek_data": false, 00:23:24.302 "copy": true, 00:23:24.302 "nvme_iov_md": false 00:23:24.302 }, 00:23:24.302 "memory_domains": [ 00:23:24.302 { 00:23:24.302 "dma_device_id": "system", 00:23:24.302 "dma_device_type": 1 00:23:24.302 }, 00:23:24.302 { 00:23:24.302 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:23:24.302 "dma_device_type": 2 00:23:24.302 } 00:23:24.302 ], 00:23:24.302 "driver_specific": { 00:23:24.302 "passthru": { 00:23:24.302 "name": "Passthru0", 00:23:24.302 "base_bdev_name": "Malloc2" 00:23:24.302 } 00:23:24.302 } 00:23:24.302 } 00:23:24.302 ]' 00:23:24.302 18:21:55 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:23:24.302 18:21:55 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:23:24.302 18:21:55 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:23:24.302 18:21:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:24.302 18:21:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:23:24.302 18:21:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:24.302 18:21:55 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:23:24.302 18:21:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:23:24.302 18:21:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:23:24.302 18:21:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:24.302 18:21:55 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:23:24.302 18:21:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:24.302 18:21:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:23:24.302 18:21:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:24.302 18:21:55 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:23:24.302 18:21:55 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:23:24.561 ************************************ 00:23:24.561 END TEST rpc_daemon_integrity 00:23:24.561 ************************************ 00:23:24.561 18:21:55 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:23:24.561 00:23:24.561 real 0m0.367s 00:23:24.561 user 0m0.192s 00:23:24.561 sys 0m0.069s 00:23:24.561 18:21:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:24.561 18:21:55 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:23:24.561 18:21:55 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:23:24.561 18:21:55 rpc -- rpc/rpc.sh@84 -- # killprocess 56631 00:23:24.561 18:21:55 rpc -- common/autotest_common.sh@954 -- # '[' -z 56631 ']' 00:23:24.561 18:21:55 rpc -- common/autotest_common.sh@958 -- # kill -0 56631 00:23:24.561 18:21:55 rpc -- common/autotest_common.sh@959 -- # uname 00:23:24.562 18:21:55 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:24.562 18:21:55 rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 56631 00:23:24.562 killing process with pid 56631 00:23:24.562 18:21:55 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:24.562 18:21:55 rpc -- common/autotest_common.sh@964 -- 
# '[' reactor_0 = sudo ']' 00:23:24.562 18:21:55 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 56631' 00:23:24.562 18:21:55 rpc -- common/autotest_common.sh@973 -- # kill 56631 00:23:24.562 18:21:55 rpc -- common/autotest_common.sh@978 -- # wait 56631 00:23:27.141 ************************************ 00:23:27.141 END TEST rpc 00:23:27.141 ************************************ 00:23:27.141 00:23:27.141 real 0m5.582s 00:23:27.141 user 0m6.083s 00:23:27.141 sys 0m0.977s 00:23:27.141 18:21:57 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:27.141 18:21:57 rpc -- common/autotest_common.sh@10 -- # set +x 00:23:27.141 18:21:58 -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:23:27.141 18:21:58 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:23:27.141 18:21:58 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:27.141 18:21:58 -- common/autotest_common.sh@10 -- # set +x 00:23:27.141 ************************************ 00:23:27.141 START TEST skip_rpc 00:23:27.141 ************************************ 00:23:27.141 18:21:58 skip_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:23:27.416 * Looking for test storage... 
00:23:27.416 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:23:27.416 18:21:58 skip_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:23:27.416 18:21:58 skip_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:23:27.416 18:21:58 skip_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:23:27.416 18:21:58 skip_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:23:27.416 18:21:58 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:27.416 18:21:58 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:27.416 18:21:58 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:27.416 18:21:58 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:23:27.416 18:21:58 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:23:27.416 18:21:58 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:23:27.416 18:21:58 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:23:27.416 18:21:58 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:23:27.416 18:21:58 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:23:27.417 18:21:58 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:23:27.417 18:21:58 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:27.417 18:21:58 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:23:27.417 18:21:58 skip_rpc -- scripts/common.sh@345 -- # : 1 00:23:27.417 18:21:58 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:27.417 18:21:58 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:27.417 18:21:58 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:23:27.417 18:21:58 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:23:27.417 18:21:58 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:27.417 18:21:58 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:23:27.417 18:21:58 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:23:27.417 18:21:58 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:23:27.417 18:21:58 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:23:27.417 18:21:58 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:27.417 18:21:58 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:23:27.417 18:21:58 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:23:27.417 18:21:58 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:27.417 18:21:58 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:27.417 18:21:58 skip_rpc -- scripts/common.sh@368 -- # return 0 00:23:27.417 18:21:58 skip_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:27.417 18:21:58 skip_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:23:27.417 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:27.417 --rc genhtml_branch_coverage=1 00:23:27.417 --rc genhtml_function_coverage=1 00:23:27.417 --rc genhtml_legend=1 00:23:27.417 --rc geninfo_all_blocks=1 00:23:27.417 --rc geninfo_unexecuted_blocks=1 00:23:27.417 00:23:27.417 ' 00:23:27.417 18:21:58 skip_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:23:27.417 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:27.417 --rc genhtml_branch_coverage=1 00:23:27.417 --rc genhtml_function_coverage=1 00:23:27.417 --rc genhtml_legend=1 00:23:27.417 --rc geninfo_all_blocks=1 00:23:27.417 --rc geninfo_unexecuted_blocks=1 00:23:27.417 00:23:27.417 ' 00:23:27.417 18:21:58 skip_rpc -- common/autotest_common.sh@1725 -- # export 
'LCOV=lcov 00:23:27.417 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:27.417 --rc genhtml_branch_coverage=1 00:23:27.417 --rc genhtml_function_coverage=1 00:23:27.417 --rc genhtml_legend=1 00:23:27.417 --rc geninfo_all_blocks=1 00:23:27.417 --rc geninfo_unexecuted_blocks=1 00:23:27.417 00:23:27.417 ' 00:23:27.417 18:21:58 skip_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:23:27.417 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:27.417 --rc genhtml_branch_coverage=1 00:23:27.417 --rc genhtml_function_coverage=1 00:23:27.417 --rc genhtml_legend=1 00:23:27.417 --rc geninfo_all_blocks=1 00:23:27.417 --rc geninfo_unexecuted_blocks=1 00:23:27.417 00:23:27.417 ' 00:23:27.417 18:21:58 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:23:27.417 18:21:58 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:23:27.417 18:21:58 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:23:27.417 18:21:58 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:23:27.417 18:21:58 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:27.417 18:21:58 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:23:27.417 ************************************ 00:23:27.417 START TEST skip_rpc 00:23:27.417 ************************************ 00:23:27.417 18:21:58 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:23:27.417 18:21:58 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=56866 00:23:27.417 18:21:58 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:23:27.417 18:21:58 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:23:27.417 18:21:58 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:23:27.676 [2024-12-06 18:21:58.403180] Starting SPDK v25.01-pre 
git sha1 b6a18b192 / DPDK 24.03.0 initialization... 00:23:27.676 [2024-12-06 18:21:58.403304] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56866 ] 00:23:27.676 [2024-12-06 18:21:58.582282] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:27.935 [2024-12-06 18:21:58.695686] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:33.209 18:22:03 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:23:33.209 18:22:03 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:23:33.209 18:22:03 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:23:33.209 18:22:03 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:23:33.209 18:22:03 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:33.209 18:22:03 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:23:33.209 18:22:03 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:33.209 18:22:03 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:23:33.209 18:22:03 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:33.209 18:22:03 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:23:33.209 18:22:03 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:23:33.209 18:22:03 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:23:33.209 18:22:03 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:33.209 18:22:03 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:33.209 18:22:03 skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( 
!es == 0 )) 00:23:33.209 18:22:03 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:23:33.209 18:22:03 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 56866 00:23:33.209 18:22:03 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 56866 ']' 00:23:33.209 18:22:03 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 56866 00:23:33.209 18:22:03 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:23:33.209 18:22:03 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:33.209 18:22:03 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 56866 00:23:33.209 killing process with pid 56866 00:23:33.209 18:22:03 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:33.209 18:22:03 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:33.209 18:22:03 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 56866' 00:23:33.209 18:22:03 skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 56866 00:23:33.209 18:22:03 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 56866 00:23:35.113 00:23:35.113 real 0m7.518s 00:23:35.113 user 0m7.031s 00:23:35.113 sys 0m0.407s 00:23:35.113 ************************************ 00:23:35.113 END TEST skip_rpc 00:23:35.113 ************************************ 00:23:35.113 18:22:05 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:35.113 18:22:05 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:23:35.113 18:22:05 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:23:35.113 18:22:05 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:23:35.113 18:22:05 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:35.113 18:22:05 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:23:35.113 
************************************ 00:23:35.113 START TEST skip_rpc_with_json 00:23:35.113 ************************************ 00:23:35.113 18:22:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:23:35.113 18:22:05 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:23:35.113 18:22:05 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=56974 00:23:35.113 18:22:05 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:23:35.113 18:22:05 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:23:35.113 18:22:05 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 56974 00:23:35.113 18:22:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 56974 ']' 00:23:35.113 18:22:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:35.113 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:35.113 18:22:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:35.113 18:22:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:35.113 18:22:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:35.113 18:22:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:23:35.113 [2024-12-06 18:22:05.995326] Starting SPDK v25.01-pre git sha1 b6a18b192 / DPDK 24.03.0 initialization... 
00:23:35.113 [2024-12-06 18:22:05.995459] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56974 ] 00:23:35.372 [2024-12-06 18:22:06.177790] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:35.372 [2024-12-06 18:22:06.294620] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:36.324 18:22:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:36.324 18:22:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:23:36.324 18:22:07 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:23:36.325 18:22:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:36.325 18:22:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:23:36.325 [2024-12-06 18:22:07.191764] nvmf_rpc.c:2707:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:23:36.325 request: 00:23:36.325 { 00:23:36.325 "trtype": "tcp", 00:23:36.325 "method": "nvmf_get_transports", 00:23:36.325 "req_id": 1 00:23:36.325 } 00:23:36.325 Got JSON-RPC error response 00:23:36.325 response: 00:23:36.325 { 00:23:36.325 "code": -19, 00:23:36.325 "message": "No such device" 00:23:36.325 } 00:23:36.325 18:22:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:23:36.325 18:22:07 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:23:36.325 18:22:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:36.325 18:22:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:23:36.325 [2024-12-06 18:22:07.207837] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 
00:23:36.325 18:22:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:36.325 18:22:07 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:23:36.325 18:22:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:36.325 18:22:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:23:36.584 18:22:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:36.584 18:22:07 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:23:36.584 { 00:23:36.584 "subsystems": [ 00:23:36.584 { 00:23:36.584 "subsystem": "fsdev", 00:23:36.584 "config": [ 00:23:36.584 { 00:23:36.584 "method": "fsdev_set_opts", 00:23:36.584 "params": { 00:23:36.584 "fsdev_io_pool_size": 65535, 00:23:36.584 "fsdev_io_cache_size": 256 00:23:36.584 } 00:23:36.584 } 00:23:36.584 ] 00:23:36.584 }, 00:23:36.584 { 00:23:36.584 "subsystem": "keyring", 00:23:36.584 "config": [] 00:23:36.584 }, 00:23:36.584 { 00:23:36.584 "subsystem": "iobuf", 00:23:36.584 "config": [ 00:23:36.584 { 00:23:36.584 "method": "iobuf_set_options", 00:23:36.584 "params": { 00:23:36.584 "small_pool_count": 8192, 00:23:36.584 "large_pool_count": 1024, 00:23:36.584 "small_bufsize": 8192, 00:23:36.584 "large_bufsize": 135168, 00:23:36.584 "enable_numa": false 00:23:36.584 } 00:23:36.584 } 00:23:36.584 ] 00:23:36.584 }, 00:23:36.584 { 00:23:36.584 "subsystem": "sock", 00:23:36.584 "config": [ 00:23:36.584 { 00:23:36.584 "method": "sock_set_default_impl", 00:23:36.584 "params": { 00:23:36.584 "impl_name": "posix" 00:23:36.584 } 00:23:36.584 }, 00:23:36.584 { 00:23:36.584 "method": "sock_impl_set_options", 00:23:36.584 "params": { 00:23:36.584 "impl_name": "ssl", 00:23:36.584 "recv_buf_size": 4096, 00:23:36.584 "send_buf_size": 4096, 00:23:36.584 "enable_recv_pipe": true, 00:23:36.584 "enable_quickack": false, 00:23:36.584 
"enable_placement_id": 0, 00:23:36.584 "enable_zerocopy_send_server": true, 00:23:36.584 "enable_zerocopy_send_client": false, 00:23:36.584 "zerocopy_threshold": 0, 00:23:36.584 "tls_version": 0, 00:23:36.584 "enable_ktls": false 00:23:36.584 } 00:23:36.584 }, 00:23:36.584 { 00:23:36.584 "method": "sock_impl_set_options", 00:23:36.584 "params": { 00:23:36.584 "impl_name": "posix", 00:23:36.584 "recv_buf_size": 2097152, 00:23:36.584 "send_buf_size": 2097152, 00:23:36.584 "enable_recv_pipe": true, 00:23:36.584 "enable_quickack": false, 00:23:36.584 "enable_placement_id": 0, 00:23:36.584 "enable_zerocopy_send_server": true, 00:23:36.584 "enable_zerocopy_send_client": false, 00:23:36.584 "zerocopy_threshold": 0, 00:23:36.584 "tls_version": 0, 00:23:36.584 "enable_ktls": false 00:23:36.584 } 00:23:36.584 } 00:23:36.584 ] 00:23:36.584 }, 00:23:36.584 { 00:23:36.584 "subsystem": "vmd", 00:23:36.584 "config": [] 00:23:36.584 }, 00:23:36.584 { 00:23:36.584 "subsystem": "accel", 00:23:36.584 "config": [ 00:23:36.584 { 00:23:36.584 "method": "accel_set_options", 00:23:36.584 "params": { 00:23:36.584 "small_cache_size": 128, 00:23:36.584 "large_cache_size": 16, 00:23:36.584 "task_count": 2048, 00:23:36.584 "sequence_count": 2048, 00:23:36.584 "buf_count": 2048 00:23:36.584 } 00:23:36.584 } 00:23:36.584 ] 00:23:36.584 }, 00:23:36.584 { 00:23:36.584 "subsystem": "bdev", 00:23:36.584 "config": [ 00:23:36.584 { 00:23:36.584 "method": "bdev_set_options", 00:23:36.584 "params": { 00:23:36.584 "bdev_io_pool_size": 65535, 00:23:36.584 "bdev_io_cache_size": 256, 00:23:36.584 "bdev_auto_examine": true, 00:23:36.584 "iobuf_small_cache_size": 128, 00:23:36.584 "iobuf_large_cache_size": 16 00:23:36.584 } 00:23:36.584 }, 00:23:36.584 { 00:23:36.584 "method": "bdev_raid_set_options", 00:23:36.584 "params": { 00:23:36.584 "process_window_size_kb": 1024, 00:23:36.584 "process_max_bandwidth_mb_sec": 0 00:23:36.584 } 00:23:36.584 }, 00:23:36.584 { 00:23:36.584 "method": "bdev_iscsi_set_options", 
00:23:36.584 "params": { 00:23:36.584 "timeout_sec": 30 00:23:36.584 } 00:23:36.584 }, 00:23:36.584 { 00:23:36.584 "method": "bdev_nvme_set_options", 00:23:36.584 "params": { 00:23:36.584 "action_on_timeout": "none", 00:23:36.584 "timeout_us": 0, 00:23:36.584 "timeout_admin_us": 0, 00:23:36.584 "keep_alive_timeout_ms": 10000, 00:23:36.584 "arbitration_burst": 0, 00:23:36.584 "low_priority_weight": 0, 00:23:36.584 "medium_priority_weight": 0, 00:23:36.584 "high_priority_weight": 0, 00:23:36.584 "nvme_adminq_poll_period_us": 10000, 00:23:36.584 "nvme_ioq_poll_period_us": 0, 00:23:36.584 "io_queue_requests": 0, 00:23:36.584 "delay_cmd_submit": true, 00:23:36.584 "transport_retry_count": 4, 00:23:36.584 "bdev_retry_count": 3, 00:23:36.584 "transport_ack_timeout": 0, 00:23:36.584 "ctrlr_loss_timeout_sec": 0, 00:23:36.584 "reconnect_delay_sec": 0, 00:23:36.584 "fast_io_fail_timeout_sec": 0, 00:23:36.584 "disable_auto_failback": false, 00:23:36.584 "generate_uuids": false, 00:23:36.584 "transport_tos": 0, 00:23:36.584 "nvme_error_stat": false, 00:23:36.584 "rdma_srq_size": 0, 00:23:36.584 "io_path_stat": false, 00:23:36.584 "allow_accel_sequence": false, 00:23:36.584 "rdma_max_cq_size": 0, 00:23:36.584 "rdma_cm_event_timeout_ms": 0, 00:23:36.584 "dhchap_digests": [ 00:23:36.584 "sha256", 00:23:36.584 "sha384", 00:23:36.584 "sha512" 00:23:36.584 ], 00:23:36.584 "dhchap_dhgroups": [ 00:23:36.584 "null", 00:23:36.584 "ffdhe2048", 00:23:36.584 "ffdhe3072", 00:23:36.584 "ffdhe4096", 00:23:36.584 "ffdhe6144", 00:23:36.584 "ffdhe8192" 00:23:36.584 ], 00:23:36.584 "rdma_umr_per_io": false 00:23:36.584 } 00:23:36.584 }, 00:23:36.584 { 00:23:36.584 "method": "bdev_nvme_set_hotplug", 00:23:36.584 "params": { 00:23:36.584 "period_us": 100000, 00:23:36.584 "enable": false 00:23:36.584 } 00:23:36.584 }, 00:23:36.584 { 00:23:36.584 "method": "bdev_wait_for_examine" 00:23:36.584 } 00:23:36.584 ] 00:23:36.584 }, 00:23:36.584 { 00:23:36.584 "subsystem": "scsi", 00:23:36.584 "config": null 
00:23:36.584 }, 00:23:36.584 { 00:23:36.584 "subsystem": "scheduler", 00:23:36.584 "config": [ 00:23:36.584 { 00:23:36.584 "method": "framework_set_scheduler", 00:23:36.584 "params": { 00:23:36.584 "name": "static" 00:23:36.584 } 00:23:36.584 } 00:23:36.584 ] 00:23:36.584 }, 00:23:36.584 { 00:23:36.584 "subsystem": "vhost_scsi", 00:23:36.585 "config": [] 00:23:36.585 }, 00:23:36.585 { 00:23:36.585 "subsystem": "vhost_blk", 00:23:36.585 "config": [] 00:23:36.585 }, 00:23:36.585 { 00:23:36.585 "subsystem": "ublk", 00:23:36.585 "config": [] 00:23:36.585 }, 00:23:36.585 { 00:23:36.585 "subsystem": "nbd", 00:23:36.585 "config": [] 00:23:36.585 }, 00:23:36.585 { 00:23:36.585 "subsystem": "nvmf", 00:23:36.585 "config": [ 00:23:36.585 { 00:23:36.585 "method": "nvmf_set_config", 00:23:36.585 "params": { 00:23:36.585 "discovery_filter": "match_any", 00:23:36.585 "admin_cmd_passthru": { 00:23:36.585 "identify_ctrlr": false 00:23:36.585 }, 00:23:36.585 "dhchap_digests": [ 00:23:36.585 "sha256", 00:23:36.585 "sha384", 00:23:36.585 "sha512" 00:23:36.585 ], 00:23:36.585 "dhchap_dhgroups": [ 00:23:36.585 "null", 00:23:36.585 "ffdhe2048", 00:23:36.585 "ffdhe3072", 00:23:36.585 "ffdhe4096", 00:23:36.585 "ffdhe6144", 00:23:36.585 "ffdhe8192" 00:23:36.585 ] 00:23:36.585 } 00:23:36.585 }, 00:23:36.585 { 00:23:36.585 "method": "nvmf_set_max_subsystems", 00:23:36.585 "params": { 00:23:36.585 "max_subsystems": 1024 00:23:36.585 } 00:23:36.585 }, 00:23:36.585 { 00:23:36.585 "method": "nvmf_set_crdt", 00:23:36.585 "params": { 00:23:36.585 "crdt1": 0, 00:23:36.585 "crdt2": 0, 00:23:36.585 "crdt3": 0 00:23:36.585 } 00:23:36.585 }, 00:23:36.585 { 00:23:36.585 "method": "nvmf_create_transport", 00:23:36.585 "params": { 00:23:36.585 "trtype": "TCP", 00:23:36.585 "max_queue_depth": 128, 00:23:36.585 "max_io_qpairs_per_ctrlr": 127, 00:23:36.585 "in_capsule_data_size": 4096, 00:23:36.585 "max_io_size": 131072, 00:23:36.585 "io_unit_size": 131072, 00:23:36.585 "max_aq_depth": 128, 00:23:36.585 
"num_shared_buffers": 511, 00:23:36.585 "buf_cache_size": 4294967295, 00:23:36.585 "dif_insert_or_strip": false, 00:23:36.585 "zcopy": false, 00:23:36.585 "c2h_success": true, 00:23:36.585 "sock_priority": 0, 00:23:36.585 "abort_timeout_sec": 1, 00:23:36.585 "ack_timeout": 0, 00:23:36.585 "data_wr_pool_size": 0 00:23:36.585 } 00:23:36.585 } 00:23:36.585 ] 00:23:36.585 }, 00:23:36.585 { 00:23:36.585 "subsystem": "iscsi", 00:23:36.585 "config": [ 00:23:36.585 { 00:23:36.585 "method": "iscsi_set_options", 00:23:36.585 "params": { 00:23:36.585 "node_base": "iqn.2016-06.io.spdk", 00:23:36.585 "max_sessions": 128, 00:23:36.585 "max_connections_per_session": 2, 00:23:36.585 "max_queue_depth": 64, 00:23:36.585 "default_time2wait": 2, 00:23:36.585 "default_time2retain": 20, 00:23:36.585 "first_burst_length": 8192, 00:23:36.585 "immediate_data": true, 00:23:36.585 "allow_duplicated_isid": false, 00:23:36.585 "error_recovery_level": 0, 00:23:36.585 "nop_timeout": 60, 00:23:36.585 "nop_in_interval": 30, 00:23:36.585 "disable_chap": false, 00:23:36.585 "require_chap": false, 00:23:36.585 "mutual_chap": false, 00:23:36.585 "chap_group": 0, 00:23:36.585 "max_large_datain_per_connection": 64, 00:23:36.585 "max_r2t_per_connection": 4, 00:23:36.585 "pdu_pool_size": 36864, 00:23:36.585 "immediate_data_pool_size": 16384, 00:23:36.585 "data_out_pool_size": 2048 00:23:36.585 } 00:23:36.585 } 00:23:36.585 ] 00:23:36.585 } 00:23:36.585 ] 00:23:36.585 } 00:23:36.585 18:22:07 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:23:36.585 18:22:07 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 56974 00:23:36.585 18:22:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 56974 ']' 00:23:36.585 18:22:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 56974 00:23:36.585 18:22:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:23:36.585 18:22:07 
skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:36.585 18:22:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 56974 00:23:36.585 killing process with pid 56974 00:23:36.585 18:22:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:36.585 18:22:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:36.585 18:22:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 56974' 00:23:36.585 18:22:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 56974 00:23:36.585 18:22:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 56974 00:23:39.129 18:22:09 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=57026 00:23:39.129 18:22:09 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:23:39.129 18:22:09 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:23:44.432 18:22:14 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 57026 00:23:44.432 18:22:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 57026 ']' 00:23:44.432 18:22:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 57026 00:23:44.432 18:22:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:23:44.432 18:22:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:44.432 18:22:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57026 00:23:44.432 killing process with pid 57026 00:23:44.432 18:22:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:44.432 18:22:14 
skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:44.432 18:22:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57026' 00:23:44.433 18:22:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 57026 00:23:44.433 18:22:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 57026 00:23:46.975 18:22:17 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:23:46.975 18:22:17 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:23:46.975 00:23:46.975 real 0m11.491s 00:23:46.975 user 0m10.900s 00:23:46.975 sys 0m0.917s 00:23:46.975 ************************************ 00:23:46.975 END TEST skip_rpc_with_json 00:23:46.975 ************************************ 00:23:46.975 18:22:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:46.975 18:22:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:23:46.975 18:22:17 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:23:46.975 18:22:17 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:23:46.975 18:22:17 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:46.975 18:22:17 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:23:46.975 ************************************ 00:23:46.975 START TEST skip_rpc_with_delay 00:23:46.975 ************************************ 00:23:46.975 18:22:17 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:23:46.975 18:22:17 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:23:46.975 18:22:17 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- 
# local es=0 00:23:46.975 18:22:17 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:23:46.975 18:22:17 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:23:46.975 18:22:17 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:46.975 18:22:17 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:23:46.975 18:22:17 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:46.975 18:22:17 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:23:46.975 18:22:17 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:46.975 18:22:17 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:23:46.975 18:22:17 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:23:46.975 18:22:17 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:23:46.975 [2024-12-06 18:22:17.563704] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
00:23:46.975 18:22:17 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:23:46.975 18:22:17 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:46.975 18:22:17 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:46.975 18:22:17 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:46.975 00:23:46.975 real 0m0.168s 00:23:46.975 user 0m0.084s 00:23:46.975 sys 0m0.083s 00:23:46.975 18:22:17 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:46.975 ************************************ 00:23:46.975 END TEST skip_rpc_with_delay 00:23:46.975 ************************************ 00:23:46.975 18:22:17 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:23:46.975 18:22:17 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:23:46.975 18:22:17 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:23:46.975 18:22:17 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:23:46.975 18:22:17 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:23:46.975 18:22:17 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:46.975 18:22:17 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:23:46.975 ************************************ 00:23:46.975 START TEST exit_on_failed_rpc_init 00:23:46.975 ************************************ 00:23:46.975 18:22:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:23:46.975 18:22:17 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:23:46.975 18:22:17 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=57165 00:23:46.975 18:22:17 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 57165 00:23:46.975 18:22:17 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 57165 ']' 00:23:46.975 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:46.975 18:22:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:46.975 18:22:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:46.975 18:22:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:46.975 18:22:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:46.975 18:22:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:23:46.975 [2024-12-06 18:22:17.799067] Starting SPDK v25.01-pre git sha1 b6a18b192 / DPDK 24.03.0 initialization... 00:23:46.975 [2024-12-06 18:22:17.799240] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57165 ] 00:23:47.234 [2024-12-06 18:22:17.976611] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:47.234 [2024-12-06 18:22:18.093029] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:48.172 18:22:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:48.172 18:22:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:23:48.172 18:22:18 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:23:48.172 18:22:18 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:23:48.172 18:22:18 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 00:23:48.172 18:22:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:23:48.172 18:22:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:23:48.172 18:22:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:48.172 18:22:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:23:48.172 18:22:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:48.172 18:22:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:23:48.172 18:22:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:48.172 18:22:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:23:48.172 18:22:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:23:48.172 18:22:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:23:48.172 [2024-12-06 18:22:19.097401] Starting SPDK v25.01-pre git sha1 b6a18b192 / DPDK 24.03.0 initialization... 
00:23:48.172 [2024-12-06 18:22:19.097742] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57183 ] 00:23:48.432 [2024-12-06 18:22:19.281488] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:48.691 [2024-12-06 18:22:19.400408] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:48.691 [2024-12-06 18:22:19.400520] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:23:48.691 [2024-12-06 18:22:19.400539] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:23:48.691 [2024-12-06 18:22:19.400557] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:23:48.950 18:22:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:23:48.950 18:22:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:48.950 18:22:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:23:48.950 18:22:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:23:48.950 18:22:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:23:48.950 18:22:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:48.950 18:22:19 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:23:48.950 18:22:19 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 57165 00:23:48.950 18:22:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 57165 ']' 00:23:48.950 18:22:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 57165 00:23:48.950 18:22:19 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:23:48.950 18:22:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:48.950 18:22:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57165 00:23:48.950 18:22:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:48.950 18:22:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:48.950 18:22:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57165' 00:23:48.950 killing process with pid 57165 00:23:48.950 18:22:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 57165 00:23:48.950 18:22:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 57165 00:23:51.486 ************************************ 00:23:51.486 END TEST exit_on_failed_rpc_init 00:23:51.486 ************************************ 00:23:51.486 00:23:51.486 real 0m4.461s 00:23:51.486 user 0m4.791s 00:23:51.486 sys 0m0.601s 00:23:51.486 18:22:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:51.486 18:22:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:23:51.486 18:22:22 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:23:51.486 00:23:51.486 real 0m24.178s 00:23:51.486 user 0m23.028s 00:23:51.486 sys 0m2.335s 00:23:51.486 18:22:22 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:51.486 18:22:22 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:23:51.486 ************************************ 00:23:51.486 END TEST skip_rpc 00:23:51.486 ************************************ 00:23:51.486 18:22:22 -- spdk/autotest.sh@158 -- # run_test rpc_client 
/home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:23:51.486 18:22:22 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:23:51.486 18:22:22 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:51.486 18:22:22 -- common/autotest_common.sh@10 -- # set +x 00:23:51.486 ************************************ 00:23:51.486 START TEST rpc_client 00:23:51.486 ************************************ 00:23:51.486 18:22:22 rpc_client -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:23:51.486 * Looking for test storage... 00:23:51.486 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:23:51.486 18:22:22 rpc_client -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:23:51.486 18:22:22 rpc_client -- common/autotest_common.sh@1711 -- # lcov --version 00:23:51.486 18:22:22 rpc_client -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:23:51.777 18:22:22 rpc_client -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:23:51.777 18:22:22 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:51.777 18:22:22 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:51.777 18:22:22 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:51.777 18:22:22 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:23:51.777 18:22:22 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:23:51.777 18:22:22 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:23:51.777 18:22:22 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:23:51.777 18:22:22 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:23:51.777 18:22:22 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:23:51.777 18:22:22 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:23:51.777 18:22:22 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:51.777 18:22:22 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:23:51.777 18:22:22 rpc_client -- scripts/common.sh@345 
-- # : 1 00:23:51.777 18:22:22 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:51.777 18:22:22 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:23:51.777 18:22:22 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:23:51.777 18:22:22 rpc_client -- scripts/common.sh@353 -- # local d=1 00:23:51.777 18:22:22 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:51.777 18:22:22 rpc_client -- scripts/common.sh@355 -- # echo 1 00:23:51.777 18:22:22 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:23:51.777 18:22:22 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:23:51.777 18:22:22 rpc_client -- scripts/common.sh@353 -- # local d=2 00:23:51.777 18:22:22 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:51.777 18:22:22 rpc_client -- scripts/common.sh@355 -- # echo 2 00:23:51.777 18:22:22 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:23:51.777 18:22:22 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:51.777 18:22:22 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:51.777 18:22:22 rpc_client -- scripts/common.sh@368 -- # return 0 00:23:51.777 18:22:22 rpc_client -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:51.777 18:22:22 rpc_client -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:23:51.777 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:51.777 --rc genhtml_branch_coverage=1 00:23:51.777 --rc genhtml_function_coverage=1 00:23:51.777 --rc genhtml_legend=1 00:23:51.777 --rc geninfo_all_blocks=1 00:23:51.777 --rc geninfo_unexecuted_blocks=1 00:23:51.777 00:23:51.777 ' 00:23:51.777 18:22:22 rpc_client -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:23:51.777 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:51.777 --rc genhtml_branch_coverage=1 00:23:51.777 --rc genhtml_function_coverage=1 00:23:51.777 --rc 
genhtml_legend=1 00:23:51.777 --rc geninfo_all_blocks=1 00:23:51.777 --rc geninfo_unexecuted_blocks=1 00:23:51.777 00:23:51.777 ' 00:23:51.777 18:22:22 rpc_client -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:23:51.777 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:51.777 --rc genhtml_branch_coverage=1 00:23:51.777 --rc genhtml_function_coverage=1 00:23:51.777 --rc genhtml_legend=1 00:23:51.777 --rc geninfo_all_blocks=1 00:23:51.777 --rc geninfo_unexecuted_blocks=1 00:23:51.777 00:23:51.777 ' 00:23:51.777 18:22:22 rpc_client -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:23:51.777 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:51.777 --rc genhtml_branch_coverage=1 00:23:51.777 --rc genhtml_function_coverage=1 00:23:51.777 --rc genhtml_legend=1 00:23:51.777 --rc geninfo_all_blocks=1 00:23:51.777 --rc geninfo_unexecuted_blocks=1 00:23:51.777 00:23:51.777 ' 00:23:51.777 18:22:22 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:23:51.777 OK 00:23:51.777 18:22:22 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:23:51.777 00:23:51.777 real 0m0.335s 00:23:51.777 user 0m0.177s 00:23:51.777 sys 0m0.172s 00:23:51.777 18:22:22 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:51.777 18:22:22 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:23:51.777 ************************************ 00:23:51.777 END TEST rpc_client 00:23:51.777 ************************************ 00:23:51.777 18:22:22 -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:23:51.777 18:22:22 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:23:51.777 18:22:22 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:51.777 18:22:22 -- common/autotest_common.sh@10 -- # set +x 00:23:51.777 ************************************ 00:23:51.777 START TEST json_config 
00:23:51.777 ************************************ 00:23:51.777 18:22:22 json_config -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:23:52.038 18:22:22 json_config -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:23:52.038 18:22:22 json_config -- common/autotest_common.sh@1711 -- # lcov --version 00:23:52.038 18:22:22 json_config -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:23:52.038 18:22:22 json_config -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:23:52.038 18:22:22 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:52.038 18:22:22 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:52.038 18:22:22 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:52.038 18:22:22 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:23:52.038 18:22:22 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:23:52.038 18:22:22 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:23:52.038 18:22:22 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:23:52.038 18:22:22 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:23:52.038 18:22:22 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:23:52.038 18:22:22 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:23:52.038 18:22:22 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:52.038 18:22:22 json_config -- scripts/common.sh@344 -- # case "$op" in 00:23:52.038 18:22:22 json_config -- scripts/common.sh@345 -- # : 1 00:23:52.038 18:22:22 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:52.038 18:22:22 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:52.038 18:22:22 json_config -- scripts/common.sh@365 -- # decimal 1 00:23:52.038 18:22:22 json_config -- scripts/common.sh@353 -- # local d=1 00:23:52.038 18:22:22 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:52.038 18:22:22 json_config -- scripts/common.sh@355 -- # echo 1 00:23:52.038 18:22:22 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:23:52.038 18:22:22 json_config -- scripts/common.sh@366 -- # decimal 2 00:23:52.038 18:22:22 json_config -- scripts/common.sh@353 -- # local d=2 00:23:52.038 18:22:22 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:52.038 18:22:22 json_config -- scripts/common.sh@355 -- # echo 2 00:23:52.038 18:22:22 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:23:52.038 18:22:22 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:52.038 18:22:22 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:52.038 18:22:22 json_config -- scripts/common.sh@368 -- # return 0 00:23:52.038 18:22:22 json_config -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:52.038 18:22:22 json_config -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:23:52.038 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:52.038 --rc genhtml_branch_coverage=1 00:23:52.038 --rc genhtml_function_coverage=1 00:23:52.038 --rc genhtml_legend=1 00:23:52.038 --rc geninfo_all_blocks=1 00:23:52.038 --rc geninfo_unexecuted_blocks=1 00:23:52.038 00:23:52.038 ' 00:23:52.038 18:22:22 json_config -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:23:52.038 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:52.038 --rc genhtml_branch_coverage=1 00:23:52.038 --rc genhtml_function_coverage=1 00:23:52.038 --rc genhtml_legend=1 00:23:52.038 --rc geninfo_all_blocks=1 00:23:52.038 --rc geninfo_unexecuted_blocks=1 00:23:52.038 00:23:52.038 ' 00:23:52.038 18:22:22 json_config -- 
common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:23:52.038 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:52.038 --rc genhtml_branch_coverage=1 00:23:52.038 --rc genhtml_function_coverage=1 00:23:52.038 --rc genhtml_legend=1 00:23:52.038 --rc geninfo_all_blocks=1 00:23:52.038 --rc geninfo_unexecuted_blocks=1 00:23:52.038 00:23:52.038 ' 00:23:52.038 18:22:22 json_config -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:23:52.038 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:52.038 --rc genhtml_branch_coverage=1 00:23:52.038 --rc genhtml_function_coverage=1 00:23:52.038 --rc genhtml_legend=1 00:23:52.038 --rc geninfo_all_blocks=1 00:23:52.038 --rc geninfo_unexecuted_blocks=1 00:23:52.038 00:23:52.038 ' 00:23:52.038 18:22:22 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:23:52.038 18:22:22 json_config -- nvmf/common.sh@7 -- # uname -s 00:23:52.038 18:22:22 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:52.038 18:22:22 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:52.038 18:22:22 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:52.038 18:22:22 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:52.038 18:22:22 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:52.038 18:22:22 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:52.038 18:22:22 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:52.038 18:22:22 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:52.038 18:22:22 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:52.038 18:22:22 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:52.038 18:22:22 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:7c846355-cda6-4b45-925e-50e7b08c3e5f 00:23:52.038 18:22:22 json_config -- nvmf/common.sh@18 -- # 
NVME_HOSTID=7c846355-cda6-4b45-925e-50e7b08c3e5f 00:23:52.038 18:22:22 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:52.039 18:22:22 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:52.039 18:22:22 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:23:52.039 18:22:22 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:52.039 18:22:22 json_config -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:23:52.039 18:22:22 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:23:52.039 18:22:22 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:52.039 18:22:22 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:52.039 18:22:22 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:52.039 18:22:22 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:52.039 18:22:22 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:52.039 18:22:22 json_config -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:52.039 18:22:22 json_config -- paths/export.sh@5 -- # export PATH 00:23:52.039 18:22:22 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:52.039 18:22:22 json_config -- nvmf/common.sh@51 -- # : 0 00:23:52.039 18:22:22 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:52.039 18:22:22 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:52.039 18:22:22 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:52.039 18:22:22 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:52.039 18:22:22 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:52.039 18:22:22 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:52.039 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:52.039 18:22:22 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:52.039 18:22:22 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:52.039 18:22:22 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:52.039 18:22:22 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 
00:23:52.039 18:22:22 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:23:52.039 18:22:22 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:23:52.039 18:22:22 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:23:52.039 WARNING: No tests are enabled so not running JSON configuration tests 00:23:52.039 18:22:22 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:23:52.039 18:22:22 json_config -- json_config/json_config.sh@27 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests' 00:23:52.039 18:22:22 json_config -- json_config/json_config.sh@28 -- # exit 0 00:23:52.039 00:23:52.039 real 0m0.240s 00:23:52.039 user 0m0.145s 00:23:52.039 sys 0m0.100s 00:23:52.039 18:22:22 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:52.039 ************************************ 00:23:52.039 END TEST json_config 00:23:52.039 ************************************ 00:23:52.039 18:22:22 json_config -- common/autotest_common.sh@10 -- # set +x 00:23:52.299 18:22:22 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:23:52.299 18:22:23 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:23:52.299 18:22:23 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:52.299 18:22:23 -- common/autotest_common.sh@10 -- # set +x 00:23:52.299 ************************************ 00:23:52.299 START TEST json_config_extra_key 00:23:52.299 ************************************ 00:23:52.299 18:22:23 json_config_extra_key -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:23:52.299 18:22:23 json_config_extra_key -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:23:52.299 18:22:23 json_config_extra_key -- 
common/autotest_common.sh@1711 -- # lcov --version 00:23:52.299 18:22:23 json_config_extra_key -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:23:52.299 18:22:23 json_config_extra_key -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:23:52.299 18:22:23 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:52.299 18:22:23 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:52.299 18:22:23 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:52.299 18:22:23 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:23:52.299 18:22:23 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:23:52.299 18:22:23 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:23:52.299 18:22:23 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:23:52.299 18:22:23 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:23:52.299 18:22:23 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:23:52.299 18:22:23 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:23:52.299 18:22:23 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:52.299 18:22:23 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:23:52.299 18:22:23 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:23:52.299 18:22:23 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:52.299 18:22:23 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:52.299 18:22:23 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:23:52.299 18:22:23 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:23:52.299 18:22:23 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:52.299 18:22:23 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:23:52.299 18:22:23 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:23:52.299 18:22:23 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:23:52.299 18:22:23 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:23:52.299 18:22:23 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:52.299 18:22:23 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:23:52.299 18:22:23 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:23:52.299 18:22:23 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:52.299 18:22:23 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:52.299 18:22:23 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:23:52.299 18:22:23 json_config_extra_key -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:52.299 18:22:23 json_config_extra_key -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:23:52.299 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:52.299 --rc genhtml_branch_coverage=1 00:23:52.299 --rc genhtml_function_coverage=1 00:23:52.299 --rc genhtml_legend=1 00:23:52.299 --rc geninfo_all_blocks=1 00:23:52.299 --rc geninfo_unexecuted_blocks=1 00:23:52.299 00:23:52.299 ' 00:23:52.299 18:22:23 json_config_extra_key -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:23:52.299 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:52.299 --rc genhtml_branch_coverage=1 00:23:52.299 --rc genhtml_function_coverage=1 00:23:52.299 --rc 
genhtml_legend=1 00:23:52.299 --rc geninfo_all_blocks=1 00:23:52.299 --rc geninfo_unexecuted_blocks=1 00:23:52.299 00:23:52.299 ' 00:23:52.299 18:22:23 json_config_extra_key -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:23:52.299 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:52.299 --rc genhtml_branch_coverage=1 00:23:52.299 --rc genhtml_function_coverage=1 00:23:52.299 --rc genhtml_legend=1 00:23:52.299 --rc geninfo_all_blocks=1 00:23:52.299 --rc geninfo_unexecuted_blocks=1 00:23:52.299 00:23:52.299 ' 00:23:52.299 18:22:23 json_config_extra_key -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:23:52.299 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:52.300 --rc genhtml_branch_coverage=1 00:23:52.300 --rc genhtml_function_coverage=1 00:23:52.300 --rc genhtml_legend=1 00:23:52.300 --rc geninfo_all_blocks=1 00:23:52.300 --rc geninfo_unexecuted_blocks=1 00:23:52.300 00:23:52.300 ' 00:23:52.300 18:22:23 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:23:52.300 18:22:23 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:23:52.300 18:22:23 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:52.300 18:22:23 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:52.300 18:22:23 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:52.300 18:22:23 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:52.300 18:22:23 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:52.300 18:22:23 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:52.300 18:22:23 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:52.300 18:22:23 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:52.300 18:22:23 json_config_extra_key -- nvmf/common.sh@16 -- # 
NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:52.300 18:22:23 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:52.300 18:22:23 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:7c846355-cda6-4b45-925e-50e7b08c3e5f 00:23:52.300 18:22:23 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=7c846355-cda6-4b45-925e-50e7b08c3e5f 00:23:52.300 18:22:23 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:52.300 18:22:23 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:52.300 18:22:23 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:23:52.300 18:22:23 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:52.300 18:22:23 json_config_extra_key -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:23:52.300 18:22:23 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:23:52.300 18:22:23 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:52.300 18:22:23 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:52.300 18:22:23 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:52.300 18:22:23 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:52.300 18:22:23 json_config_extra_key -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:52.300 18:22:23 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:52.300 18:22:23 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:23:52.300 18:22:23 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:52.300 18:22:23 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:23:52.300 18:22:23 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:52.300 18:22:23 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:52.300 18:22:23 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:52.300 18:22:23 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:52.300 18:22:23 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
00:23:52.300 18:22:23 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:52.300 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:52.300 18:22:23 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:52.300 18:22:23 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:52.300 18:22:23 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:52.300 18:22:23 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:23:52.300 18:22:23 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:23:52.300 18:22:23 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:23:52.300 18:22:23 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:23:52.300 18:22:23 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:23:52.300 18:22:23 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:23:52.300 18:22:23 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:23:52.300 18:22:23 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:23:52.300 18:22:23 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:23:52.300 INFO: launching applications... 00:23:52.300 18:22:23 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:23:52.300 18:22:23 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 
00:23:52.300 18:22:23 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:23:52.300 18:22:23 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:23:52.300 18:22:23 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:23:52.300 18:22:23 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:23:52.300 18:22:23 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:23:52.300 18:22:23 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:23:52.300 18:22:23 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:23:52.300 18:22:23 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:23:52.300 18:22:23 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=57393 00:23:52.300 Waiting for target to run... 00:23:52.300 18:22:23 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:23:52.300 18:22:23 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 57393 /var/tmp/spdk_tgt.sock 00:23:52.300 18:22:23 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:23:52.301 18:22:23 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 57393 ']' 00:23:52.301 18:22:23 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:23:52.301 18:22:23 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:52.301 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 
00:23:52.301 18:22:23 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:23:52.301 18:22:23 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:52.301 18:22:23 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:23:52.558 [2024-12-06 18:22:23.337617] Starting SPDK v25.01-pre git sha1 b6a18b192 / DPDK 24.03.0 initialization... 00:23:52.558 [2024-12-06 18:22:23.337753] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57393 ] 00:23:52.816 [2024-12-06 18:22:23.733944] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:53.074 [2024-12-06 18:22:23.841746] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:53.639 18:22:24 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:53.639 00:23:53.639 18:22:24 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:23:53.639 18:22:24 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:23:53.639 INFO: shutting down applications... 00:23:53.639 18:22:24 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
00:23:53.639 18:22:24 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:23:53.639 18:22:24 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:23:53.639 18:22:24 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:23:53.639 18:22:24 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 57393 ]] 00:23:53.639 18:22:24 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 57393 00:23:53.639 18:22:24 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:23:53.639 18:22:24 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:23:53.639 18:22:24 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57393 00:23:53.639 18:22:24 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:23:54.203 18:22:25 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:23:54.203 18:22:25 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:23:54.203 18:22:25 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57393 00:23:54.203 18:22:25 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:23:54.768 18:22:25 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:23:54.768 18:22:25 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:23:54.768 18:22:25 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57393 00:23:54.768 18:22:25 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:23:55.331 18:22:26 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:23:55.331 18:22:26 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:23:55.331 18:22:26 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57393 00:23:55.331 18:22:26 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:23:55.895 18:22:26 json_config_extra_key -- json_config/common.sh@40 -- # 
(( i++ )) 00:23:55.895 18:22:26 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:23:55.895 18:22:26 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57393 00:23:55.895 18:22:26 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:23:56.154 18:22:27 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:23:56.154 18:22:27 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:23:56.154 18:22:27 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57393 00:23:56.154 18:22:27 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:23:56.722 18:22:27 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:23:56.722 18:22:27 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:23:56.722 18:22:27 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57393 00:23:56.722 18:22:27 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:23:56.722 18:22:27 json_config_extra_key -- json_config/common.sh@43 -- # break 00:23:56.722 SPDK target shutdown done 00:23:56.722 18:22:27 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:23:56.722 18:22:27 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:23:56.722 Success 00:23:56.722 18:22:27 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:23:56.722 00:23:56.722 real 0m4.572s 00:23:56.722 user 0m4.056s 00:23:56.722 sys 0m0.608s 00:23:56.722 18:22:27 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:56.722 ************************************ 00:23:56.722 END TEST json_config_extra_key 00:23:56.722 18:22:27 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:23:56.722 ************************************ 00:23:56.722 18:22:27 -- spdk/autotest.sh@161 -- # run_test alias_rpc 
/home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:23:56.722 18:22:27 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:23:56.722 18:22:27 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:56.722 18:22:27 -- common/autotest_common.sh@10 -- # set +x 00:23:56.722 ************************************ 00:23:56.722 START TEST alias_rpc 00:23:56.722 ************************************ 00:23:56.722 18:22:27 alias_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:23:56.981 * Looking for test storage... 00:23:56.981 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:23:56.981 18:22:27 alias_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:23:56.981 18:22:27 alias_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:23:56.981 18:22:27 alias_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:23:56.981 18:22:27 alias_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:23:56.981 18:22:27 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:56.981 18:22:27 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:56.981 18:22:27 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:56.981 18:22:27 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:23:56.981 18:22:27 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:23:56.981 18:22:27 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:23:56.981 18:22:27 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:23:56.981 18:22:27 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:23:56.981 18:22:27 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:23:56.981 18:22:27 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:23:56.981 18:22:27 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:56.981 18:22:27 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:23:56.981 18:22:27 alias_rpc -- 
scripts/common.sh@345 -- # : 1 00:23:56.981 18:22:27 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:56.981 18:22:27 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:23:56.981 18:22:27 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:23:56.981 18:22:27 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:23:56.981 18:22:27 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:56.981 18:22:27 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:23:56.981 18:22:27 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:23:56.981 18:22:27 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:23:56.981 18:22:27 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:23:56.981 18:22:27 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:56.981 18:22:27 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:23:56.981 18:22:27 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:23:56.981 18:22:27 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:56.981 18:22:27 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:56.981 18:22:27 alias_rpc -- scripts/common.sh@368 -- # return 0 00:23:56.981 18:22:27 alias_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:56.981 18:22:27 alias_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:23:56.981 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:56.981 --rc genhtml_branch_coverage=1 00:23:56.981 --rc genhtml_function_coverage=1 00:23:56.981 --rc genhtml_legend=1 00:23:56.981 --rc geninfo_all_blocks=1 00:23:56.981 --rc geninfo_unexecuted_blocks=1 00:23:56.981 00:23:56.981 ' 00:23:56.981 18:22:27 alias_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:23:56.981 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:56.981 --rc genhtml_branch_coverage=1 00:23:56.981 --rc genhtml_function_coverage=1 00:23:56.981 --rc 
genhtml_legend=1 00:23:56.981 --rc geninfo_all_blocks=1 00:23:56.981 --rc geninfo_unexecuted_blocks=1 00:23:56.981 00:23:56.981 ' 00:23:56.981 18:22:27 alias_rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:23:56.981 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:56.981 --rc genhtml_branch_coverage=1 00:23:56.981 --rc genhtml_function_coverage=1 00:23:56.981 --rc genhtml_legend=1 00:23:56.981 --rc geninfo_all_blocks=1 00:23:56.981 --rc geninfo_unexecuted_blocks=1 00:23:56.981 00:23:56.981 ' 00:23:56.981 18:22:27 alias_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:23:56.981 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:56.981 --rc genhtml_branch_coverage=1 00:23:56.981 --rc genhtml_function_coverage=1 00:23:56.981 --rc genhtml_legend=1 00:23:56.981 --rc geninfo_all_blocks=1 00:23:56.981 --rc geninfo_unexecuted_blocks=1 00:23:56.981 00:23:56.981 ' 00:23:56.981 18:22:27 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:23:56.981 18:22:27 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=57505 00:23:56.981 18:22:27 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:23:56.981 18:22:27 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 57505 00:23:56.981 18:22:27 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 57505 ']' 00:23:56.981 18:22:27 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:56.981 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:56.981 18:22:27 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:56.981 18:22:27 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:23:56.981 18:22:27 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:56.981 18:22:27 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:23:57.240 [2024-12-06 18:22:27.986797] Starting SPDK v25.01-pre git sha1 b6a18b192 / DPDK 24.03.0 initialization... 00:23:57.240 [2024-12-06 18:22:27.986919] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57505 ] 00:23:57.240 [2024-12-06 18:22:28.168367] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:57.499 [2024-12-06 18:22:28.283744] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:58.435 18:22:29 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:58.436 18:22:29 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:23:58.436 18:22:29 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:23:58.695 18:22:29 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 57505 00:23:58.695 18:22:29 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 57505 ']' 00:23:58.695 18:22:29 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 57505 00:23:58.695 18:22:29 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:23:58.695 18:22:29 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:58.695 18:22:29 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57505 00:23:58.695 18:22:29 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:58.695 killing process with pid 57505 00:23:58.695 18:22:29 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:58.695 18:22:29 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57505' 00:23:58.695 18:22:29 alias_rpc -- 
common/autotest_common.sh@973 -- # kill 57505 00:23:58.695 18:22:29 alias_rpc -- common/autotest_common.sh@978 -- # wait 57505 00:24:01.251 ************************************ 00:24:01.251 END TEST alias_rpc 00:24:01.251 ************************************ 00:24:01.251 00:24:01.251 real 0m4.289s 00:24:01.251 user 0m4.228s 00:24:01.251 sys 0m0.633s 00:24:01.251 18:22:31 alias_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:01.251 18:22:31 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:24:01.251 18:22:31 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:24:01.251 18:22:31 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:24:01.251 18:22:31 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:24:01.251 18:22:31 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:01.251 18:22:31 -- common/autotest_common.sh@10 -- # set +x 00:24:01.251 ************************************ 00:24:01.251 START TEST spdkcli_tcp 00:24:01.251 ************************************ 00:24:01.251 18:22:32 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:24:01.251 * Looking for test storage... 
00:24:01.251 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:24:01.251 18:22:32 spdkcli_tcp -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:24:01.251 18:22:32 spdkcli_tcp -- common/autotest_common.sh@1711 -- # lcov --version 00:24:01.251 18:22:32 spdkcli_tcp -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:24:01.526 18:22:32 spdkcli_tcp -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:24:01.526 18:22:32 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:01.526 18:22:32 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:01.526 18:22:32 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:01.526 18:22:32 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:24:01.526 18:22:32 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:24:01.526 18:22:32 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:24:01.526 18:22:32 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:24:01.526 18:22:32 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:24:01.526 18:22:32 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:24:01.526 18:22:32 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:24:01.526 18:22:32 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:01.526 18:22:32 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:24:01.526 18:22:32 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:24:01.526 18:22:32 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:01.526 18:22:32 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:01.526 18:22:32 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:24:01.526 18:22:32 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:24:01.526 18:22:32 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:01.526 18:22:32 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:24:01.526 18:22:32 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:24:01.526 18:22:32 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:24:01.526 18:22:32 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:24:01.526 18:22:32 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:01.526 18:22:32 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:24:01.526 18:22:32 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:24:01.526 18:22:32 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:01.526 18:22:32 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:01.526 18:22:32 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:24:01.526 18:22:32 spdkcli_tcp -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:01.526 18:22:32 spdkcli_tcp -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:24:01.526 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:01.526 --rc genhtml_branch_coverage=1 00:24:01.526 --rc genhtml_function_coverage=1 00:24:01.526 --rc genhtml_legend=1 00:24:01.526 --rc geninfo_all_blocks=1 00:24:01.526 --rc geninfo_unexecuted_blocks=1 00:24:01.526 00:24:01.526 ' 00:24:01.526 18:22:32 spdkcli_tcp -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:24:01.526 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:01.526 --rc genhtml_branch_coverage=1 00:24:01.526 --rc genhtml_function_coverage=1 00:24:01.526 --rc genhtml_legend=1 00:24:01.526 --rc geninfo_all_blocks=1 00:24:01.526 --rc geninfo_unexecuted_blocks=1 00:24:01.526 00:24:01.526 ' 00:24:01.526 18:22:32 spdkcli_tcp -- 
common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:24:01.526 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:01.526 --rc genhtml_branch_coverage=1 00:24:01.526 --rc genhtml_function_coverage=1 00:24:01.526 --rc genhtml_legend=1 00:24:01.526 --rc geninfo_all_blocks=1 00:24:01.526 --rc geninfo_unexecuted_blocks=1 00:24:01.526 00:24:01.526 ' 00:24:01.526 18:22:32 spdkcli_tcp -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:24:01.526 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:01.526 --rc genhtml_branch_coverage=1 00:24:01.526 --rc genhtml_function_coverage=1 00:24:01.526 --rc genhtml_legend=1 00:24:01.526 --rc geninfo_all_blocks=1 00:24:01.526 --rc geninfo_unexecuted_blocks=1 00:24:01.526 00:24:01.526 ' 00:24:01.526 18:22:32 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:24:01.526 18:22:32 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:24:01.526 18:22:32 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:24:01.526 18:22:32 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:24:01.526 18:22:32 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:24:01.526 18:22:32 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:01.526 18:22:32 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:24:01.526 18:22:32 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:01.526 18:22:32 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:01.526 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:24:01.526 18:22:32 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=57617 00:24:01.526 18:22:32 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 57617 00:24:01.526 18:22:32 spdkcli_tcp -- common/autotest_common.sh@835 -- # '[' -z 57617 ']' 00:24:01.526 18:22:32 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:01.526 18:22:32 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:01.526 18:22:32 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:01.526 18:22:32 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:24:01.526 18:22:32 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:01.526 18:22:32 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:01.526 [2024-12-06 18:22:32.355935] Starting SPDK v25.01-pre git sha1 b6a18b192 / DPDK 24.03.0 initialization... 
00:24:01.526 [2024-12-06 18:22:32.357525] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57617 ] 00:24:01.785 [2024-12-06 18:22:32.543097] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:24:01.785 [2024-12-06 18:22:32.672852] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:01.785 [2024-12-06 18:22:32.672886] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:02.721 18:22:33 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:02.721 18:22:33 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 00:24:02.721 18:22:33 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=57634 00:24:02.721 18:22:33 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:24:02.721 18:22:33 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:24:02.980 [ 00:24:02.980 "bdev_malloc_delete", 00:24:02.980 "bdev_malloc_create", 00:24:02.980 "bdev_null_resize", 00:24:02.980 "bdev_null_delete", 00:24:02.980 "bdev_null_create", 00:24:02.980 "bdev_nvme_cuse_unregister", 00:24:02.980 "bdev_nvme_cuse_register", 00:24:02.980 "bdev_opal_new_user", 00:24:02.980 "bdev_opal_set_lock_state", 00:24:02.980 "bdev_opal_delete", 00:24:02.980 "bdev_opal_get_info", 00:24:02.980 "bdev_opal_create", 00:24:02.980 "bdev_nvme_opal_revert", 00:24:02.980 "bdev_nvme_opal_init", 00:24:02.980 "bdev_nvme_send_cmd", 00:24:02.980 "bdev_nvme_set_keys", 00:24:02.980 "bdev_nvme_get_path_iostat", 00:24:02.980 "bdev_nvme_get_mdns_discovery_info", 00:24:02.980 "bdev_nvme_stop_mdns_discovery", 00:24:02.980 "bdev_nvme_start_mdns_discovery", 00:24:02.980 "bdev_nvme_set_multipath_policy", 00:24:02.980 
"bdev_nvme_set_preferred_path", 00:24:02.980 "bdev_nvme_get_io_paths", 00:24:02.980 "bdev_nvme_remove_error_injection", 00:24:02.980 "bdev_nvme_add_error_injection", 00:24:02.980 "bdev_nvme_get_discovery_info", 00:24:02.980 "bdev_nvme_stop_discovery", 00:24:02.980 "bdev_nvme_start_discovery", 00:24:02.980 "bdev_nvme_get_controller_health_info", 00:24:02.980 "bdev_nvme_disable_controller", 00:24:02.980 "bdev_nvme_enable_controller", 00:24:02.980 "bdev_nvme_reset_controller", 00:24:02.980 "bdev_nvme_get_transport_statistics", 00:24:02.980 "bdev_nvme_apply_firmware", 00:24:02.980 "bdev_nvme_detach_controller", 00:24:02.980 "bdev_nvme_get_controllers", 00:24:02.980 "bdev_nvme_attach_controller", 00:24:02.980 "bdev_nvme_set_hotplug", 00:24:02.980 "bdev_nvme_set_options", 00:24:02.980 "bdev_passthru_delete", 00:24:02.980 "bdev_passthru_create", 00:24:02.980 "bdev_lvol_set_parent_bdev", 00:24:02.980 "bdev_lvol_set_parent", 00:24:02.980 "bdev_lvol_check_shallow_copy", 00:24:02.980 "bdev_lvol_start_shallow_copy", 00:24:02.980 "bdev_lvol_grow_lvstore", 00:24:02.980 "bdev_lvol_get_lvols", 00:24:02.980 "bdev_lvol_get_lvstores", 00:24:02.980 "bdev_lvol_delete", 00:24:02.980 "bdev_lvol_set_read_only", 00:24:02.980 "bdev_lvol_resize", 00:24:02.980 "bdev_lvol_decouple_parent", 00:24:02.980 "bdev_lvol_inflate", 00:24:02.980 "bdev_lvol_rename", 00:24:02.980 "bdev_lvol_clone_bdev", 00:24:02.980 "bdev_lvol_clone", 00:24:02.980 "bdev_lvol_snapshot", 00:24:02.980 "bdev_lvol_create", 00:24:02.980 "bdev_lvol_delete_lvstore", 00:24:02.980 "bdev_lvol_rename_lvstore", 00:24:02.980 "bdev_lvol_create_lvstore", 00:24:02.980 "bdev_raid_set_options", 00:24:02.980 "bdev_raid_remove_base_bdev", 00:24:02.980 "bdev_raid_add_base_bdev", 00:24:02.980 "bdev_raid_delete", 00:24:02.980 "bdev_raid_create", 00:24:02.980 "bdev_raid_get_bdevs", 00:24:02.980 "bdev_error_inject_error", 00:24:02.980 "bdev_error_delete", 00:24:02.980 "bdev_error_create", 00:24:02.980 "bdev_split_delete", 00:24:02.980 
"bdev_split_create", 00:24:02.980 "bdev_delay_delete", 00:24:02.980 "bdev_delay_create", 00:24:02.980 "bdev_delay_update_latency", 00:24:02.980 "bdev_zone_block_delete", 00:24:02.980 "bdev_zone_block_create", 00:24:02.980 "blobfs_create", 00:24:02.980 "blobfs_detect", 00:24:02.980 "blobfs_set_cache_size", 00:24:02.980 "bdev_aio_delete", 00:24:02.980 "bdev_aio_rescan", 00:24:02.980 "bdev_aio_create", 00:24:02.980 "bdev_ftl_set_property", 00:24:02.980 "bdev_ftl_get_properties", 00:24:02.980 "bdev_ftl_get_stats", 00:24:02.980 "bdev_ftl_unmap", 00:24:02.980 "bdev_ftl_unload", 00:24:02.980 "bdev_ftl_delete", 00:24:02.980 "bdev_ftl_load", 00:24:02.980 "bdev_ftl_create", 00:24:02.980 "bdev_virtio_attach_controller", 00:24:02.980 "bdev_virtio_scsi_get_devices", 00:24:02.980 "bdev_virtio_detach_controller", 00:24:02.980 "bdev_virtio_blk_set_hotplug", 00:24:02.980 "bdev_iscsi_delete", 00:24:02.980 "bdev_iscsi_create", 00:24:02.980 "bdev_iscsi_set_options", 00:24:02.980 "accel_error_inject_error", 00:24:02.980 "ioat_scan_accel_module", 00:24:02.980 "dsa_scan_accel_module", 00:24:02.980 "iaa_scan_accel_module", 00:24:02.980 "keyring_file_remove_key", 00:24:02.980 "keyring_file_add_key", 00:24:02.980 "keyring_linux_set_options", 00:24:02.980 "fsdev_aio_delete", 00:24:02.980 "fsdev_aio_create", 00:24:02.980 "iscsi_get_histogram", 00:24:02.980 "iscsi_enable_histogram", 00:24:02.980 "iscsi_set_options", 00:24:02.980 "iscsi_get_auth_groups", 00:24:02.980 "iscsi_auth_group_remove_secret", 00:24:02.980 "iscsi_auth_group_add_secret", 00:24:02.980 "iscsi_delete_auth_group", 00:24:02.980 "iscsi_create_auth_group", 00:24:02.980 "iscsi_set_discovery_auth", 00:24:02.980 "iscsi_get_options", 00:24:02.980 "iscsi_target_node_request_logout", 00:24:02.980 "iscsi_target_node_set_redirect", 00:24:02.980 "iscsi_target_node_set_auth", 00:24:02.980 "iscsi_target_node_add_lun", 00:24:02.980 "iscsi_get_stats", 00:24:02.980 "iscsi_get_connections", 00:24:02.980 "iscsi_portal_group_set_auth", 
00:24:02.980 "iscsi_start_portal_group", 00:24:02.980 "iscsi_delete_portal_group", 00:24:02.980 "iscsi_create_portal_group", 00:24:02.980 "iscsi_get_portal_groups", 00:24:02.980 "iscsi_delete_target_node", 00:24:02.980 "iscsi_target_node_remove_pg_ig_maps", 00:24:02.980 "iscsi_target_node_add_pg_ig_maps", 00:24:02.980 "iscsi_create_target_node", 00:24:02.980 "iscsi_get_target_nodes", 00:24:02.980 "iscsi_delete_initiator_group", 00:24:02.980 "iscsi_initiator_group_remove_initiators", 00:24:02.980 "iscsi_initiator_group_add_initiators", 00:24:02.980 "iscsi_create_initiator_group", 00:24:02.980 "iscsi_get_initiator_groups", 00:24:02.980 "nvmf_set_crdt", 00:24:02.980 "nvmf_set_config", 00:24:02.980 "nvmf_set_max_subsystems", 00:24:02.980 "nvmf_stop_mdns_prr", 00:24:02.980 "nvmf_publish_mdns_prr", 00:24:02.980 "nvmf_subsystem_get_listeners", 00:24:02.980 "nvmf_subsystem_get_qpairs", 00:24:02.980 "nvmf_subsystem_get_controllers", 00:24:02.980 "nvmf_get_stats", 00:24:02.980 "nvmf_get_transports", 00:24:02.981 "nvmf_create_transport", 00:24:02.981 "nvmf_get_targets", 00:24:02.981 "nvmf_delete_target", 00:24:02.981 "nvmf_create_target", 00:24:02.981 "nvmf_subsystem_allow_any_host", 00:24:02.981 "nvmf_subsystem_set_keys", 00:24:02.981 "nvmf_subsystem_remove_host", 00:24:02.981 "nvmf_subsystem_add_host", 00:24:02.981 "nvmf_ns_remove_host", 00:24:02.981 "nvmf_ns_add_host", 00:24:02.981 "nvmf_subsystem_remove_ns", 00:24:02.981 "nvmf_subsystem_set_ns_ana_group", 00:24:02.981 "nvmf_subsystem_add_ns", 00:24:02.981 "nvmf_subsystem_listener_set_ana_state", 00:24:02.981 "nvmf_discovery_get_referrals", 00:24:02.981 "nvmf_discovery_remove_referral", 00:24:02.981 "nvmf_discovery_add_referral", 00:24:02.981 "nvmf_subsystem_remove_listener", 00:24:02.981 "nvmf_subsystem_add_listener", 00:24:02.981 "nvmf_delete_subsystem", 00:24:02.981 "nvmf_create_subsystem", 00:24:02.981 "nvmf_get_subsystems", 00:24:02.981 "env_dpdk_get_mem_stats", 00:24:02.981 "nbd_get_disks", 00:24:02.981 
"nbd_stop_disk", 00:24:02.981 "nbd_start_disk", 00:24:02.981 "ublk_recover_disk", 00:24:02.981 "ublk_get_disks", 00:24:02.981 "ublk_stop_disk", 00:24:02.981 "ublk_start_disk", 00:24:02.981 "ublk_destroy_target", 00:24:02.981 "ublk_create_target", 00:24:02.981 "virtio_blk_create_transport", 00:24:02.981 "virtio_blk_get_transports", 00:24:02.981 "vhost_controller_set_coalescing", 00:24:02.981 "vhost_get_controllers", 00:24:02.981 "vhost_delete_controller", 00:24:02.981 "vhost_create_blk_controller", 00:24:02.981 "vhost_scsi_controller_remove_target", 00:24:02.981 "vhost_scsi_controller_add_target", 00:24:02.981 "vhost_start_scsi_controller", 00:24:02.981 "vhost_create_scsi_controller", 00:24:02.981 "thread_set_cpumask", 00:24:02.981 "scheduler_set_options", 00:24:02.981 "framework_get_governor", 00:24:02.981 "framework_get_scheduler", 00:24:02.981 "framework_set_scheduler", 00:24:02.981 "framework_get_reactors", 00:24:02.981 "thread_get_io_channels", 00:24:02.981 "thread_get_pollers", 00:24:02.981 "thread_get_stats", 00:24:02.981 "framework_monitor_context_switch", 00:24:02.981 "spdk_kill_instance", 00:24:02.981 "log_enable_timestamps", 00:24:02.981 "log_get_flags", 00:24:02.981 "log_clear_flag", 00:24:02.981 "log_set_flag", 00:24:02.981 "log_get_level", 00:24:02.981 "log_set_level", 00:24:02.981 "log_get_print_level", 00:24:02.981 "log_set_print_level", 00:24:02.981 "framework_enable_cpumask_locks", 00:24:02.981 "framework_disable_cpumask_locks", 00:24:02.981 "framework_wait_init", 00:24:02.981 "framework_start_init", 00:24:02.981 "scsi_get_devices", 00:24:02.981 "bdev_get_histogram", 00:24:02.981 "bdev_enable_histogram", 00:24:02.981 "bdev_set_qos_limit", 00:24:02.981 "bdev_set_qd_sampling_period", 00:24:02.981 "bdev_get_bdevs", 00:24:02.981 "bdev_reset_iostat", 00:24:02.981 "bdev_get_iostat", 00:24:02.981 "bdev_examine", 00:24:02.981 "bdev_wait_for_examine", 00:24:02.981 "bdev_set_options", 00:24:02.981 "accel_get_stats", 00:24:02.981 "accel_set_options", 
00:24:02.981 "accel_set_driver", 00:24:02.981 "accel_crypto_key_destroy", 00:24:02.981 "accel_crypto_keys_get", 00:24:02.981 "accel_crypto_key_create", 00:24:02.981 "accel_assign_opc", 00:24:02.981 "accel_get_module_info", 00:24:02.981 "accel_get_opc_assignments", 00:24:02.981 "vmd_rescan", 00:24:02.981 "vmd_remove_device", 00:24:02.981 "vmd_enable", 00:24:02.981 "sock_get_default_impl", 00:24:02.981 "sock_set_default_impl", 00:24:02.981 "sock_impl_set_options", 00:24:02.981 "sock_impl_get_options", 00:24:02.981 "iobuf_get_stats", 00:24:02.981 "iobuf_set_options", 00:24:02.981 "keyring_get_keys", 00:24:02.981 "framework_get_pci_devices", 00:24:02.981 "framework_get_config", 00:24:02.981 "framework_get_subsystems", 00:24:02.981 "fsdev_set_opts", 00:24:02.981 "fsdev_get_opts", 00:24:02.981 "trace_get_info", 00:24:02.981 "trace_get_tpoint_group_mask", 00:24:02.981 "trace_disable_tpoint_group", 00:24:02.981 "trace_enable_tpoint_group", 00:24:02.981 "trace_clear_tpoint_mask", 00:24:02.981 "trace_set_tpoint_mask", 00:24:02.981 "notify_get_notifications", 00:24:02.981 "notify_get_types", 00:24:02.981 "spdk_get_version", 00:24:02.981 "rpc_get_methods" 00:24:02.981 ] 00:24:02.981 18:22:33 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:24:02.981 18:22:33 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:02.981 18:22:33 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:02.981 18:22:33 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:24:02.981 18:22:33 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 57617 00:24:02.981 18:22:33 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 57617 ']' 00:24:02.981 18:22:33 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 57617 00:24:02.981 18:22:33 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 00:24:02.981 18:22:33 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:02.981 18:22:33 spdkcli_tcp -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57617 00:24:03.240 killing process with pid 57617 00:24:03.240 18:22:33 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:03.240 18:22:33 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:03.240 18:22:33 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57617' 00:24:03.240 18:22:33 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 57617 00:24:03.240 18:22:33 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 57617 00:24:05.773 00:24:05.773 real 0m4.499s 00:24:05.773 user 0m8.002s 00:24:05.773 sys 0m0.706s 00:24:05.773 18:22:36 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:05.773 18:22:36 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:05.773 ************************************ 00:24:05.773 END TEST spdkcli_tcp 00:24:05.773 ************************************ 00:24:05.773 18:22:36 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:24:05.773 18:22:36 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:24:05.773 18:22:36 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:05.773 18:22:36 -- common/autotest_common.sh@10 -- # set +x 00:24:05.773 ************************************ 00:24:05.773 START TEST dpdk_mem_utility 00:24:05.773 ************************************ 00:24:05.773 18:22:36 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:24:05.773 * Looking for test storage... 
00:24:05.773 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:24:06.032 18:22:36 dpdk_mem_utility -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:24:06.032 18:22:36 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:24:06.032 18:22:36 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # lcov --version 00:24:06.032 18:22:36 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:24:06.032 18:22:36 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:06.032 18:22:36 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:06.032 18:22:36 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:06.032 18:22:36 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:24:06.032 18:22:36 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:24:06.032 18:22:36 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:24:06.032 18:22:36 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:24:06.032 18:22:36 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:24:06.032 18:22:36 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:24:06.032 18:22:36 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:24:06.032 18:22:36 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:06.032 18:22:36 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:24:06.032 18:22:36 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:24:06.032 18:22:36 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:06.032 18:22:36 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:06.032 18:22:36 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:24:06.032 18:22:36 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:24:06.032 18:22:36 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:06.032 18:22:36 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:24:06.032 18:22:36 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:24:06.032 18:22:36 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:24:06.032 18:22:36 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:24:06.032 18:22:36 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:06.032 18:22:36 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:24:06.032 18:22:36 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:24:06.032 18:22:36 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:06.032 18:22:36 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:06.032 18:22:36 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:24:06.032 18:22:36 dpdk_mem_utility -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:06.032 18:22:36 dpdk_mem_utility -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:24:06.032 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:06.032 --rc genhtml_branch_coverage=1 00:24:06.032 --rc genhtml_function_coverage=1 00:24:06.032 --rc genhtml_legend=1 00:24:06.032 --rc geninfo_all_blocks=1 00:24:06.032 --rc geninfo_unexecuted_blocks=1 00:24:06.032 00:24:06.032 ' 00:24:06.032 18:22:36 dpdk_mem_utility -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:24:06.032 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:06.032 --rc genhtml_branch_coverage=1 00:24:06.032 --rc genhtml_function_coverage=1 00:24:06.032 --rc genhtml_legend=1 00:24:06.032 --rc geninfo_all_blocks=1 00:24:06.032 --rc 
geninfo_unexecuted_blocks=1 00:24:06.032 00:24:06.032 ' 00:24:06.032 18:22:36 dpdk_mem_utility -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:24:06.032 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:06.033 --rc genhtml_branch_coverage=1 00:24:06.033 --rc genhtml_function_coverage=1 00:24:06.033 --rc genhtml_legend=1 00:24:06.033 --rc geninfo_all_blocks=1 00:24:06.033 --rc geninfo_unexecuted_blocks=1 00:24:06.033 00:24:06.033 ' 00:24:06.033 18:22:36 dpdk_mem_utility -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:24:06.033 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:06.033 --rc genhtml_branch_coverage=1 00:24:06.033 --rc genhtml_function_coverage=1 00:24:06.033 --rc genhtml_legend=1 00:24:06.033 --rc geninfo_all_blocks=1 00:24:06.033 --rc geninfo_unexecuted_blocks=1 00:24:06.033 00:24:06.033 ' 00:24:06.033 18:22:36 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:24:06.033 18:22:36 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=57739 00:24:06.033 18:22:36 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:24:06.033 18:22:36 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 57739 00:24:06.033 18:22:36 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 57739 ']' 00:24:06.033 18:22:36 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:06.033 18:22:36 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:06.033 18:22:36 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:06.033 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:24:06.033 18:22:36 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:06.033 18:22:36 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:24:06.033 [2024-12-06 18:22:36.952544] Starting SPDK v25.01-pre git sha1 b6a18b192 / DPDK 24.03.0 initialization... 00:24:06.033 [2024-12-06 18:22:36.952882] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57739 ] 00:24:06.292 [2024-12-06 18:22:37.131935] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:06.549 [2024-12-06 18:22:37.265027] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:07.484 18:22:38 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:07.484 18:22:38 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:24:07.484 18:22:38 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:24:07.484 18:22:38 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:24:07.484 18:22:38 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:07.484 18:22:38 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:24:07.484 { 00:24:07.484 "filename": "/tmp/spdk_mem_dump.txt" 00:24:07.484 } 00:24:07.484 18:22:38 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:07.484 18:22:38 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:24:07.484 DPDK memory size 824.000000 MiB in 1 heap(s) 00:24:07.484 1 heaps totaling size 824.000000 MiB 00:24:07.484 size: 824.000000 MiB heap id: 0 00:24:07.484 end heaps---------- 00:24:07.484 9 mempools totaling size 603.782043 MiB 00:24:07.484 
size: 212.674988 MiB name: PDU_immediate_data_Pool 00:24:07.484 size: 158.602051 MiB name: PDU_data_out_Pool 00:24:07.484 size: 100.555481 MiB name: bdev_io_57739 00:24:07.484 size: 50.003479 MiB name: msgpool_57739 00:24:07.484 size: 36.509338 MiB name: fsdev_io_57739 00:24:07.484 size: 21.763794 MiB name: PDU_Pool 00:24:07.484 size: 19.513306 MiB name: SCSI_TASK_Pool 00:24:07.484 size: 4.133484 MiB name: evtpool_57739 00:24:07.484 size: 0.026123 MiB name: Session_Pool 00:24:07.484 end mempools------- 00:24:07.484 6 memzones totaling size 4.142822 MiB 00:24:07.484 size: 1.000366 MiB name: RG_ring_0_57739 00:24:07.484 size: 1.000366 MiB name: RG_ring_1_57739 00:24:07.484 size: 1.000366 MiB name: RG_ring_4_57739 00:24:07.484 size: 1.000366 MiB name: RG_ring_5_57739 00:24:07.484 size: 0.125366 MiB name: RG_ring_2_57739 00:24:07.484 size: 0.015991 MiB name: RG_ring_3_57739 00:24:07.484 end memzones------- 00:24:07.484 18:22:38 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:24:07.484 heap id: 0 total size: 824.000000 MiB number of busy elements: 319 number of free elements: 18 00:24:07.484 list of free elements. 
size: 16.780396 MiB 00:24:07.484 element at address: 0x200006400000 with size: 1.995972 MiB 00:24:07.484 element at address: 0x20000a600000 with size: 1.995972 MiB 00:24:07.484 element at address: 0x200003e00000 with size: 1.991028 MiB 00:24:07.484 element at address: 0x200019500040 with size: 0.999939 MiB 00:24:07.484 element at address: 0x200019900040 with size: 0.999939 MiB 00:24:07.484 element at address: 0x200019a00000 with size: 0.999084 MiB 00:24:07.484 element at address: 0x200032600000 with size: 0.994324 MiB 00:24:07.484 element at address: 0x200000400000 with size: 0.992004 MiB 00:24:07.484 element at address: 0x200019200000 with size: 0.959656 MiB 00:24:07.484 element at address: 0x200019d00040 with size: 0.936401 MiB 00:24:07.484 element at address: 0x200000200000 with size: 0.716980 MiB 00:24:07.484 element at address: 0x20001b400000 with size: 0.561951 MiB 00:24:07.484 element at address: 0x200000c00000 with size: 0.489197 MiB 00:24:07.484 element at address: 0x200019600000 with size: 0.487976 MiB 00:24:07.484 element at address: 0x200019e00000 with size: 0.485413 MiB 00:24:07.484 element at address: 0x200012c00000 with size: 0.433228 MiB 00:24:07.484 element at address: 0x200028800000 with size: 0.390442 MiB 00:24:07.484 element at address: 0x200000800000 with size: 0.350891 MiB 00:24:07.484 list of standard malloc elements. 
size: 199.288696 MiB 00:24:07.484 element at address: 0x20000a7fef80 with size: 132.000183 MiB 00:24:07.484 element at address: 0x2000065fef80 with size: 64.000183 MiB 00:24:07.484 element at address: 0x2000193fff80 with size: 1.000183 MiB 00:24:07.484 element at address: 0x2000197fff80 with size: 1.000183 MiB 00:24:07.484 element at address: 0x200019bfff80 with size: 1.000183 MiB 00:24:07.484 element at address: 0x2000003d9e80 with size: 0.140808 MiB 00:24:07.484 element at address: 0x200019deff40 with size: 0.062683 MiB 00:24:07.484 element at address: 0x2000003fdf40 with size: 0.007996 MiB 00:24:07.484 element at address: 0x20000a5ff040 with size: 0.000427 MiB 00:24:07.484 element at address: 0x200019defdc0 with size: 0.000366 MiB 00:24:07.484 element at address: 0x200012bff040 with size: 0.000305 MiB 00:24:07.484 element at address: 0x2000002d7b00 with size: 0.000244 MiB 00:24:07.484 element at address: 0x2000003d9d80 with size: 0.000244 MiB 00:24:07.484 element at address: 0x2000004fdf40 with size: 0.000244 MiB 00:24:07.484 element at address: 0x2000004fe040 with size: 0.000244 MiB 00:24:07.484 element at address: 0x2000004fe140 with size: 0.000244 MiB 00:24:07.484 element at address: 0x2000004fe240 with size: 0.000244 MiB 00:24:07.484 element at address: 0x2000004fe340 with size: 0.000244 MiB 00:24:07.484 element at address: 0x2000004fe440 with size: 0.000244 MiB 00:24:07.484 element at address: 0x2000004fe540 with size: 0.000244 MiB 00:24:07.484 element at address: 0x2000004fe640 with size: 0.000244 MiB 00:24:07.484 element at address: 0x2000004fe740 with size: 0.000244 MiB 00:24:07.484 element at address: 0x2000004fe840 with size: 0.000244 MiB 00:24:07.484 element at address: 0x2000004fe940 with size: 0.000244 MiB 00:24:07.484 element at address: 0x2000004fea40 with size: 0.000244 MiB 00:24:07.484 element at address: 0x2000004feb40 with size: 0.000244 MiB 00:24:07.484 element at address: 0x2000004fec40 with size: 0.000244 MiB 00:24:07.484 element at 
address: 0x2000004fed40 with size: 0.000244 MiB 00:24:07.484 element at address: 0x2000004fee40 with size: 0.000244 MiB 00:24:07.484 element at address: 0x2000004fef40 with size: 0.000244 MiB 00:24:07.484 element at address: 0x2000004ff040 with size: 0.000244 MiB 00:24:07.484 element at address: 0x2000004ff140 with size: 0.000244 MiB 00:24:07.484 element at address: 0x2000004ff240 with size: 0.000244 MiB 00:24:07.485 element at address: 0x2000004ff340 with size: 0.000244 MiB 00:24:07.485 element at address: 0x2000004ff440 with size: 0.000244 MiB 00:24:07.485 element at address: 0x2000004ff540 with size: 0.000244 MiB 00:24:07.485 element at address: 0x2000004ff640 with size: 0.000244 MiB 00:24:07.485 element at address: 0x2000004ff740 with size: 0.000244 MiB 00:24:07.485 element at address: 0x2000004ff840 with size: 0.000244 MiB 00:24:07.485 element at address: 0x2000004ff940 with size: 0.000244 MiB 00:24:07.485 element at address: 0x2000004ffbc0 with size: 0.000244 MiB 00:24:07.485 element at address: 0x2000004ffcc0 with size: 0.000244 MiB 00:24:07.485 element at address: 0x2000004ffdc0 with size: 0.000244 MiB 00:24:07.485 element at address: 0x20000087e1c0 with size: 0.000244 MiB 00:24:07.485 element at address: 0x20000087e2c0 with size: 0.000244 MiB 00:24:07.485 element at address: 0x20000087e3c0 with size: 0.000244 MiB 00:24:07.485 element at address: 0x20000087e4c0 with size: 0.000244 MiB 00:24:07.485 element at address: 0x20000087e5c0 with size: 0.000244 MiB 00:24:07.485 element at address: 0x20000087e6c0 with size: 0.000244 MiB 00:24:07.485 element at address: 0x20000087e7c0 with size: 0.000244 MiB 00:24:07.485 element at address: 0x20000087e8c0 with size: 0.000244 MiB 00:24:07.485 element at address: 0x20000087e9c0 with size: 0.000244 MiB 00:24:07.485 element at address: 0x20000087eac0 with size: 0.000244 MiB 00:24:07.485 element at address: 0x20000087ebc0 with size: 0.000244 MiB 00:24:07.485 element at address: 0x20000087ecc0 with size: 0.000244 MiB 
00:24:07.485 element at address: 0x20000087edc0 with size: 0.000244 MiB 00:24:07.485 element at address: 0x20000087eec0 with size: 0.000244 MiB 00:24:07.485 element at address: 0x20000087efc0 with size: 0.000244 MiB 00:24:07.485 element at address: 0x20000087f0c0 with size: 0.000244 MiB 00:24:07.485 element at address: 0x20000087f1c0 with size: 0.000244 MiB 00:24:07.485 element at address: 0x20000087f2c0 with size: 0.000244 MiB 00:24:07.485 element at address: 0x20000087f3c0 with size: 0.000244 MiB 00:24:07.485 element at address: 0x20000087f4c0 with size: 0.000244 MiB 00:24:07.485 element at address: 0x2000008ff800 with size: 0.000244 MiB 00:24:07.485 element at address: 0x2000008ffa80 with size: 0.000244 MiB 00:24:07.485 element at address: 0x200000c7d3c0 with size: 0.000244 MiB 00:24:07.485 element at address: 0x200000c7d4c0 with size: 0.000244 MiB 00:24:07.485 element at address: 0x200000c7d5c0 with size: 0.000244 MiB 00:24:07.485 element at address: 0x200000c7d6c0 with size: 0.000244 MiB 00:24:07.485 element at address: 0x200000c7d7c0 with size: 0.000244 MiB 00:24:07.485 element at address: 0x200000c7d8c0 with size: 0.000244 MiB 00:24:07.485 element at address: 0x200000c7d9c0 with size: 0.000244 MiB 00:24:07.485 element at address: 0x200000c7dac0 with size: 0.000244 MiB 00:24:07.485 element at address: 0x200000c7dbc0 with size: 0.000244 MiB 00:24:07.485 element at address: 0x200000c7dcc0 with size: 0.000244 MiB 00:24:07.485 element at address: 0x200000c7ddc0 with size: 0.000244 MiB 00:24:07.485 element at address: 0x200000c7dec0 with size: 0.000244 MiB 00:24:07.485 element at address: 0x200000c7dfc0 with size: 0.000244 MiB 00:24:07.485 element at address: 0x200000c7e0c0 with size: 0.000244 MiB 00:24:07.485 element at address: 0x200000c7e1c0 with size: 0.000244 MiB 00:24:07.485 element at address: 0x200000c7e2c0 with size: 0.000244 MiB 00:24:07.485 element at address: 0x200000c7e3c0 with size: 0.000244 MiB 00:24:07.485 element at address: 0x200000c7e4c0 with 
size: 0.000244 MiB 00:24:07.485 element at address: 0x200000c7e5c0 with size: 0.000244 MiB 00:24:07.485 element at address: 0x200000c7e6c0 with size: 0.000244 MiB 00:24:07.485 element at address: 0x200000c7e7c0 with size: 0.000244 MiB 00:24:07.485 element at address: 0x200000c7e8c0 with size: 0.000244 MiB 00:24:07.485 element at address: 0x200000c7e9c0 with size: 0.000244 MiB 00:24:07.485 element at address: 0x200000c7eac0 with size: 0.000244 MiB 00:24:07.485 element at address: 0x200000c7ebc0 with size: 0.000244 MiB 00:24:07.485 element at address: 0x200000cfef00 with size: 0.000244 MiB 00:24:07.485 element at address: 0x200000cff000 with size: 0.000244 MiB 00:24:07.485 element at address: 0x20000a5ff200 with size: 0.000244 MiB 00:24:07.485 element at address: 0x20000a5ff300 with size: 0.000244 MiB 00:24:07.485 element at address: 0x20000a5ff400 with size: 0.000244 MiB 00:24:07.485 element at address: 0x20000a5ff500 with size: 0.000244 MiB 00:24:07.485 element at address: 0x20000a5ff600 with size: 0.000244 MiB 00:24:07.485 element at address: 0x20000a5ff700 with size: 0.000244 MiB 00:24:07.485 element at address: 0x20000a5ff800 with size: 0.000244 MiB 00:24:07.485 element at address: 0x20000a5ff900 with size: 0.000244 MiB 00:24:07.485 element at address: 0x20000a5ffa00 with size: 0.000244 MiB 00:24:07.485 element at address: 0x20000a5ffb00 with size: 0.000244 MiB 00:24:07.485 element at address: 0x20000a5ffc00 with size: 0.000244 MiB 00:24:07.485 element at address: 0x20000a5ffd00 with size: 0.000244 MiB 00:24:07.485 element at address: 0x20000a5ffe00 with size: 0.000244 MiB 00:24:07.485 element at address: 0x20000a5fff00 with size: 0.000244 MiB 00:24:07.485 element at address: 0x200012bff180 with size: 0.000244 MiB 00:24:07.485 element at address: 0x200012bff280 with size: 0.000244 MiB 00:24:07.485 element at address: 0x200012bff380 with size: 0.000244 MiB 00:24:07.485 element at address: 0x200012bff480 with size: 0.000244 MiB 00:24:07.485 element at address: 
0x200012bff580 with size: 0.000244 MiB 00:24:07.485 element at address: 0x200012bff680 with size: 0.000244 MiB 00:24:07.485 element at address: 0x200012bff780 with size: 0.000244 MiB 00:24:07.485 element at address: 0x200012bff880 with size: 0.000244 MiB 00:24:07.485 element at address: 0x200012bff980 with size: 0.000244 MiB 00:24:07.485 element at address: 0x200012bffa80 with size: 0.000244 MiB 00:24:07.485 element at address: 0x200012bffb80 with size: 0.000244 MiB 00:24:07.485 element at address: 0x200012bffc80 with size: 0.000244 MiB 00:24:07.485 element at address: 0x200012bfff00 with size: 0.000244 MiB 00:24:07.485 element at address: 0x200012c6ee80 with size: 0.000244 MiB 00:24:07.485 element at address: 0x200012c6ef80 with size: 0.000244 MiB 00:24:07.485 element at address: 0x200012c6f080 with size: 0.000244 MiB 00:24:07.485 element at address: 0x200012c6f180 with size: 0.000244 MiB 00:24:07.485 element at address: 0x200012c6f280 with size: 0.000244 MiB 00:24:07.485 element at address: 0x200012c6f380 with size: 0.000244 MiB 00:24:07.485 element at address: 0x200012c6f480 with size: 0.000244 MiB 00:24:07.485 element at address: 0x200012c6f580 with size: 0.000244 MiB 00:24:07.485 element at address: 0x200012c6f680 with size: 0.000244 MiB 00:24:07.485 element at address: 0x200012c6f780 with size: 0.000244 MiB 00:24:07.485 element at address: 0x200012c6f880 with size: 0.000244 MiB 00:24:07.485 element at address: 0x200012cefbc0 with size: 0.000244 MiB 00:24:07.485 element at address: 0x2000192fdd00 with size: 0.000244 MiB 00:24:07.485 element at address: 0x20001967cec0 with size: 0.000244 MiB 00:24:07.485 element at address: 0x20001967cfc0 with size: 0.000244 MiB 00:24:07.485 element at address: 0x20001967d0c0 with size: 0.000244 MiB 00:24:07.485 element at address: 0x20001967d1c0 with size: 0.000244 MiB 00:24:07.485 element at address: 0x20001967d2c0 with size: 0.000244 MiB 00:24:07.485 element at address: 0x20001967d3c0 with size: 0.000244 MiB 00:24:07.485 
element at address: 0x20001967d4c0 with size: 0.000244 MiB 00:24:07.485 element at address: 0x20001967d5c0 with size: 0.000244 MiB 00:24:07.485 element at address: 0x20001967d6c0 with size: 0.000244 MiB 00:24:07.485 element at address: 0x20001967d7c0 with size: 0.000244 MiB 00:24:07.485 element at address: 0x20001967d8c0 with size: 0.000244 MiB 00:24:07.485 element at address: 0x20001967d9c0 with size: 0.000244 MiB 00:24:07.485 element at address: 0x2000196fdd00 with size: 0.000244 MiB 00:24:07.485 element at address: 0x200019affc40 with size: 0.000244 MiB 00:24:07.485 element at address: 0x200019defbc0 with size: 0.000244 MiB 00:24:07.485 element at address: 0x200019defcc0 with size: 0.000244 MiB 00:24:07.485 element at address: 0x200019ebc680 with size: 0.000244 MiB 00:24:07.485 element at address: 0x20001b48fdc0 with size: 0.000244 MiB 00:24:07.485 element at address: 0x20001b48fec0 with size: 0.000244 MiB 00:24:07.485 element at address: 0x20001b48ffc0 with size: 0.000244 MiB 00:24:07.485 element at address: 0x20001b4900c0 with size: 0.000244 MiB 00:24:07.485 element at address: 0x20001b4901c0 with size: 0.000244 MiB 00:24:07.485 element at address: 0x20001b4902c0 with size: 0.000244 MiB 00:24:07.485 element at address: 0x20001b4903c0 with size: 0.000244 MiB 00:24:07.485 element at address: 0x20001b4904c0 with size: 0.000244 MiB 00:24:07.485 element at address: 0x20001b4905c0 with size: 0.000244 MiB 00:24:07.485 element at address: 0x20001b4906c0 with size: 0.000244 MiB 00:24:07.485 element at address: 0x20001b4907c0 with size: 0.000244 MiB 00:24:07.485 element at address: 0x20001b4908c0 with size: 0.000244 MiB 00:24:07.485 element at address: 0x20001b4909c0 with size: 0.000244 MiB 00:24:07.485 element at address: 0x20001b490ac0 with size: 0.000244 MiB 00:24:07.485 element at address: 0x20001b490bc0 with size: 0.000244 MiB 00:24:07.485 element at address: 0x20001b490cc0 with size: 0.000244 MiB 00:24:07.485 element at address: 0x20001b490dc0 with size: 0.000244 
MiB 00:24:07.485 element at address: 0x20001b490ec0 with size: 0.000244 MiB 00:24:07.485 element at address: 0x20001b490fc0 with size: 0.000244 MiB 00:24:07.485 element at address: 0x20001b4910c0 with size: 0.000244 MiB 00:24:07.485 element at address: 0x20001b4911c0 with size: 0.000244 MiB 00:24:07.485 element at address: 0x20001b4912c0 with size: 0.000244 MiB 00:24:07.485 element at address: 0x20001b4913c0 with size: 0.000244 MiB 00:24:07.485 element at address: 0x20001b4914c0 with size: 0.000244 MiB 00:24:07.485 element at address: 0x20001b4915c0 with size: 0.000244 MiB 00:24:07.485 element at address: 0x20001b4916c0 with size: 0.000244 MiB 00:24:07.485 element at address: 0x20001b4917c0 with size: 0.000244 MiB 00:24:07.485 element at address: 0x20001b4918c0 with size: 0.000244 MiB 00:24:07.485 element at address: 0x20001b4919c0 with size: 0.000244 MiB 00:24:07.485 element at address: 0x20001b491ac0 with size: 0.000244 MiB 00:24:07.485 element at address: 0x20001b491bc0 with size: 0.000244 MiB 00:24:07.485 element at address: 0x20001b491cc0 with size: 0.000244 MiB 00:24:07.485 element at address: 0x20001b491dc0 with size: 0.000244 MiB 00:24:07.485 element at address: 0x20001b491ec0 with size: 0.000244 MiB 00:24:07.485 element at address: 0x20001b491fc0 with size: 0.000244 MiB 00:24:07.485 element at address: 0x20001b4920c0 with size: 0.000244 MiB 00:24:07.485 element at address: 0x20001b4921c0 with size: 0.000244 MiB 00:24:07.486 element at address: 0x20001b4922c0 with size: 0.000244 MiB 00:24:07.486 element at address: 0x20001b4923c0 with size: 0.000244 MiB 00:24:07.486 element at address: 0x20001b4924c0 with size: 0.000244 MiB 00:24:07.486 element at address: 0x20001b4925c0 with size: 0.000244 MiB 00:24:07.486 element at address: 0x20001b4926c0 with size: 0.000244 MiB 00:24:07.486 element at address: 0x20001b4927c0 with size: 0.000244 MiB 00:24:07.486 element at address: 0x20001b4928c0 with size: 0.000244 MiB 00:24:07.486 element at address: 0x20001b4929c0 
with size: 0.000244 MiB 00:24:07.486 element at address: 0x20001b492ac0 with size: 0.000244 MiB 00:24:07.486 element at address: 0x20001b492bc0 with size: 0.000244 MiB 00:24:07.486 element at address: 0x20001b492cc0 with size: 0.000244 MiB 00:24:07.486 element at address: 0x20001b492dc0 with size: 0.000244 MiB 00:24:07.486 element at address: 0x20001b492ec0 with size: 0.000244 MiB 00:24:07.486 element at address: 0x20001b492fc0 with size: 0.000244 MiB 00:24:07.486 element at address: 0x20001b4930c0 with size: 0.000244 MiB 00:24:07.486 element at address: 0x20001b4931c0 with size: 0.000244 MiB 00:24:07.486 element at address: 0x20001b4932c0 with size: 0.000244 MiB 00:24:07.486 element at address: 0x20001b4933c0 with size: 0.000244 MiB 00:24:07.486 element at address: 0x20001b4934c0 with size: 0.000244 MiB 00:24:07.486 element at address: 0x20001b4935c0 with size: 0.000244 MiB 00:24:07.486 element at address: 0x20001b4936c0 with size: 0.000244 MiB 00:24:07.486 element at address: 0x20001b4937c0 with size: 0.000244 MiB 00:24:07.486 element at address: 0x20001b4938c0 with size: 0.000244 MiB 00:24:07.486 element at address: 0x20001b4939c0 with size: 0.000244 MiB 00:24:07.486 element at address: 0x20001b493ac0 with size: 0.000244 MiB 00:24:07.486 element at address: 0x20001b493bc0 with size: 0.000244 MiB 00:24:07.486 element at address: 0x20001b493cc0 with size: 0.000244 MiB 00:24:07.486 element at address: 0x20001b493dc0 with size: 0.000244 MiB 00:24:07.486 element at address: 0x20001b493ec0 with size: 0.000244 MiB 00:24:07.486 element at address: 0x20001b493fc0 with size: 0.000244 MiB 00:24:07.486 element at address: 0x20001b4940c0 with size: 0.000244 MiB 00:24:07.486 element at address: 0x20001b4941c0 with size: 0.000244 MiB 00:24:07.486 element at address: 0x20001b4942c0 with size: 0.000244 MiB 00:24:07.486 element at address: 0x20001b4943c0 with size: 0.000244 MiB 00:24:07.486 element at address: 0x20001b4944c0 with size: 0.000244 MiB 00:24:07.486 element at 
address: 0x20001b4945c0 with size: 0.000244 MiB 00:24:07.486 element at address: 0x20001b4946c0 with size: 0.000244 MiB 00:24:07.486 element at address: 0x20001b4947c0 with size: 0.000244 MiB 00:24:07.486 element at address: 0x20001b4948c0 with size: 0.000244 MiB 00:24:07.486 element at address: 0x20001b4949c0 with size: 0.000244 MiB 00:24:07.486 element at address: 0x20001b494ac0 with size: 0.000244 MiB 00:24:07.486 element at address: 0x20001b494bc0 with size: 0.000244 MiB 00:24:07.486 element at address: 0x20001b494cc0 with size: 0.000244 MiB 00:24:07.486 element at address: 0x20001b494dc0 with size: 0.000244 MiB 00:24:07.486 element at address: 0x20001b494ec0 with size: 0.000244 MiB 00:24:07.486 element at address: 0x20001b494fc0 with size: 0.000244 MiB 00:24:07.486 element at address: 0x20001b4950c0 with size: 0.000244 MiB 00:24:07.486 element at address: 0x20001b4951c0 with size: 0.000244 MiB 00:24:07.486 element at address: 0x20001b4952c0 with size: 0.000244 MiB 00:24:07.486 element at address: 0x20001b4953c0 with size: 0.000244 MiB 00:24:07.486 element at address: 0x200028863f40 with size: 0.000244 MiB 00:24:07.486 element at address: 0x200028864040 with size: 0.000244 MiB 00:24:07.486 element at address: 0x20002886ad00 with size: 0.000244 MiB 00:24:07.486 element at address: 0x20002886af80 with size: 0.000244 MiB 00:24:07.486 element at address: 0x20002886b080 with size: 0.000244 MiB 00:24:07.486 element at address: 0x20002886b180 with size: 0.000244 MiB 00:24:07.486 element at address: 0x20002886b280 with size: 0.000244 MiB 00:24:07.486 element at address: 0x20002886b380 with size: 0.000244 MiB 00:24:07.486 element at address: 0x20002886b480 with size: 0.000244 MiB 00:24:07.486 element at address: 0x20002886b580 with size: 0.000244 MiB 00:24:07.486 element at address: 0x20002886b680 with size: 0.000244 MiB 00:24:07.486 element at address: 0x20002886b780 with size: 0.000244 MiB 00:24:07.486 element at address: 0x20002886b880 with size: 0.000244 MiB 
00:24:07.486 element at address: 0x20002886b980 with size: 0.000244 MiB 00:24:07.486 element at address: 0x20002886ba80 with size: 0.000244 MiB 00:24:07.486 element at address: 0x20002886bb80 with size: 0.000244 MiB 00:24:07.486 element at address: 0x20002886bc80 with size: 0.000244 MiB 00:24:07.486 element at address: 0x20002886bd80 with size: 0.000244 MiB 00:24:07.486 element at address: 0x20002886be80 with size: 0.000244 MiB 00:24:07.486 element at address: 0x20002886bf80 with size: 0.000244 MiB 00:24:07.486 element at address: 0x20002886c080 with size: 0.000244 MiB 00:24:07.486 element at address: 0x20002886c180 with size: 0.000244 MiB 00:24:07.486 element at address: 0x20002886c280 with size: 0.000244 MiB 00:24:07.486 element at address: 0x20002886c380 with size: 0.000244 MiB 00:24:07.486 element at address: 0x20002886c480 with size: 0.000244 MiB 00:24:07.486 element at address: 0x20002886c580 with size: 0.000244 MiB 00:24:07.486 element at address: 0x20002886c680 with size: 0.000244 MiB 00:24:07.486 element at address: 0x20002886c780 with size: 0.000244 MiB 00:24:07.486 element at address: 0x20002886c880 with size: 0.000244 MiB 00:24:07.486 element at address: 0x20002886c980 with size: 0.000244 MiB 00:24:07.486 element at address: 0x20002886ca80 with size: 0.000244 MiB 00:24:07.486 element at address: 0x20002886cb80 with size: 0.000244 MiB 00:24:07.486 element at address: 0x20002886cc80 with size: 0.000244 MiB 00:24:07.486 element at address: 0x20002886cd80 with size: 0.000244 MiB 00:24:07.486 element at address: 0x20002886ce80 with size: 0.000244 MiB 00:24:07.486 element at address: 0x20002886cf80 with size: 0.000244 MiB 00:24:07.486 element at address: 0x20002886d080 with size: 0.000244 MiB 00:24:07.486 element at address: 0x20002886d180 with size: 0.000244 MiB 00:24:07.486 element at address: 0x20002886d280 with size: 0.000244 MiB 00:24:07.486 element at address: 0x20002886d380 with size: 0.000244 MiB 00:24:07.486 element at address: 0x20002886d480 with 
size: 0.000244 MiB 00:24:07.486 element at address: 0x20002886d580 with size: 0.000244 MiB 00:24:07.486 element at address: 0x20002886d680 with size: 0.000244 MiB 00:24:07.486 element at address: 0x20002886d780 with size: 0.000244 MiB 00:24:07.486 element at address: 0x20002886d880 with size: 0.000244 MiB 00:24:07.486 element at address: 0x20002886d980 with size: 0.000244 MiB 00:24:07.486 element at address: 0x20002886da80 with size: 0.000244 MiB 00:24:07.486 element at address: 0x20002886db80 with size: 0.000244 MiB 00:24:07.486 element at address: 0x20002886dc80 with size: 0.000244 MiB 00:24:07.486 element at address: 0x20002886dd80 with size: 0.000244 MiB 00:24:07.486 element at address: 0x20002886de80 with size: 0.000244 MiB 00:24:07.486 element at address: 0x20002886df80 with size: 0.000244 MiB 00:24:07.486 element at address: 0x20002886e080 with size: 0.000244 MiB 00:24:07.486 element at address: 0x20002886e180 with size: 0.000244 MiB 00:24:07.486 element at address: 0x20002886e280 with size: 0.000244 MiB 00:24:07.486 element at address: 0x20002886e380 with size: 0.000244 MiB 00:24:07.486 element at address: 0x20002886e480 with size: 0.000244 MiB 00:24:07.486 element at address: 0x20002886e580 with size: 0.000244 MiB 00:24:07.486 element at address: 0x20002886e680 with size: 0.000244 MiB 00:24:07.486 element at address: 0x20002886e780 with size: 0.000244 MiB 00:24:07.486 element at address: 0x20002886e880 with size: 0.000244 MiB 00:24:07.486 element at address: 0x20002886e980 with size: 0.000244 MiB 00:24:07.486 element at address: 0x20002886ea80 with size: 0.000244 MiB 00:24:07.486 element at address: 0x20002886eb80 with size: 0.000244 MiB 00:24:07.486 element at address: 0x20002886ec80 with size: 0.000244 MiB 00:24:07.486 element at address: 0x20002886ed80 with size: 0.000244 MiB 00:24:07.486 element at address: 0x20002886ee80 with size: 0.000244 MiB 00:24:07.486 element at address: 0x20002886ef80 with size: 0.000244 MiB 00:24:07.486 element at address: 
0x20002886f080 with size: 0.000244 MiB 00:24:07.486 element at address: 0x20002886f180 with size: 0.000244 MiB 00:24:07.486 element at address: 0x20002886f280 with size: 0.000244 MiB 00:24:07.486 element at address: 0x20002886f380 with size: 0.000244 MiB 00:24:07.486 element at address: 0x20002886f480 with size: 0.000244 MiB 00:24:07.486 element at address: 0x20002886f580 with size: 0.000244 MiB 00:24:07.486 element at address: 0x20002886f680 with size: 0.000244 MiB 00:24:07.486 element at address: 0x20002886f780 with size: 0.000244 MiB 00:24:07.486 element at address: 0x20002886f880 with size: 0.000244 MiB 00:24:07.486 element at address: 0x20002886f980 with size: 0.000244 MiB 00:24:07.486 element at address: 0x20002886fa80 with size: 0.000244 MiB 00:24:07.486 element at address: 0x20002886fb80 with size: 0.000244 MiB 00:24:07.486 element at address: 0x20002886fc80 with size: 0.000244 MiB 00:24:07.486 element at address: 0x20002886fd80 with size: 0.000244 MiB 00:24:07.486 element at address: 0x20002886fe80 with size: 0.000244 MiB 00:24:07.486 list of memzone associated elements. 
size: 607.930908 MiB 00:24:07.486 element at address: 0x20001b4954c0 with size: 211.416809 MiB 00:24:07.486 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:24:07.486 element at address: 0x20002886ff80 with size: 157.562622 MiB 00:24:07.486 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:24:07.486 element at address: 0x200012df1e40 with size: 100.055115 MiB 00:24:07.486 associated memzone info: size: 100.054932 MiB name: MP_bdev_io_57739_0 00:24:07.486 element at address: 0x200000dff340 with size: 48.003113 MiB 00:24:07.486 associated memzone info: size: 48.002930 MiB name: MP_msgpool_57739_0 00:24:07.486 element at address: 0x200003ffdb40 with size: 36.008972 MiB 00:24:07.486 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_57739_0 00:24:07.486 element at address: 0x200019fbe900 with size: 20.255615 MiB 00:24:07.486 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:24:07.486 element at address: 0x2000327feb00 with size: 18.005127 MiB 00:24:07.487 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:24:07.487 element at address: 0x2000004ffec0 with size: 3.000305 MiB 00:24:07.487 associated memzone info: size: 3.000122 MiB name: MP_evtpool_57739_0 00:24:07.487 element at address: 0x2000009ffdc0 with size: 2.000549 MiB 00:24:07.487 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_57739 00:24:07.487 element at address: 0x2000002d7c00 with size: 1.008179 MiB 00:24:07.487 associated memzone info: size: 1.007996 MiB name: MP_evtpool_57739 00:24:07.487 element at address: 0x2000196fde00 with size: 1.008179 MiB 00:24:07.487 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:24:07.487 element at address: 0x200019ebc780 with size: 1.008179 MiB 00:24:07.487 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:24:07.487 element at address: 0x2000192fde00 with size: 1.008179 MiB 00:24:07.487 associated 
memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:24:07.487 element at address: 0x200012cefcc0 with size: 1.008179 MiB 00:24:07.487 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:24:07.487 element at address: 0x200000cff100 with size: 1.000549 MiB 00:24:07.487 associated memzone info: size: 1.000366 MiB name: RG_ring_0_57739 00:24:07.487 element at address: 0x2000008ffb80 with size: 1.000549 MiB 00:24:07.487 associated memzone info: size: 1.000366 MiB name: RG_ring_1_57739 00:24:07.487 element at address: 0x200019affd40 with size: 1.000549 MiB 00:24:07.487 associated memzone info: size: 1.000366 MiB name: RG_ring_4_57739 00:24:07.487 element at address: 0x2000326fe8c0 with size: 1.000549 MiB 00:24:07.487 associated memzone info: size: 1.000366 MiB name: RG_ring_5_57739 00:24:07.487 element at address: 0x20000087f5c0 with size: 0.500549 MiB 00:24:07.487 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_57739 00:24:07.487 element at address: 0x200000c7ecc0 with size: 0.500549 MiB 00:24:07.487 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_57739 00:24:07.487 element at address: 0x20001967dac0 with size: 0.500549 MiB 00:24:07.487 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:24:07.487 element at address: 0x200012c6f980 with size: 0.500549 MiB 00:24:07.487 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:24:07.487 element at address: 0x200019e7c440 with size: 0.250549 MiB 00:24:07.487 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:24:07.487 element at address: 0x2000002b78c0 with size: 0.125549 MiB 00:24:07.487 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_57739 00:24:07.487 element at address: 0x20000085df80 with size: 0.125549 MiB 00:24:07.487 associated memzone info: size: 0.125366 MiB name: RG_ring_2_57739 00:24:07.487 element at address: 0x2000192f5ac0 with size: 0.031799 MiB 00:24:07.487 
associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:24:07.487 element at address: 0x200028864140 with size: 0.023804 MiB 00:24:07.487 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:24:07.487 element at address: 0x200000859d40 with size: 0.016174 MiB 00:24:07.487 associated memzone info: size: 0.015991 MiB name: RG_ring_3_57739 00:24:07.487 element at address: 0x20002886a2c0 with size: 0.002502 MiB 00:24:07.487 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:24:07.487 element at address: 0x2000004ffa40 with size: 0.000366 MiB 00:24:07.487 associated memzone info: size: 0.000183 MiB name: MP_msgpool_57739 00:24:07.487 element at address: 0x2000008ff900 with size: 0.000366 MiB 00:24:07.487 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_57739 00:24:07.487 element at address: 0x200012bffd80 with size: 0.000366 MiB 00:24:07.487 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_57739 00:24:07.487 element at address: 0x20002886ae00 with size: 0.000366 MiB 00:24:07.487 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:24:07.487 18:22:38 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:24:07.487 18:22:38 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 57739 00:24:07.487 18:22:38 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 57739 ']' 00:24:07.487 18:22:38 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 57739 00:24:07.487 18:22:38 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname 00:24:07.487 18:22:38 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:07.487 18:22:38 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57739 00:24:07.487 18:22:38 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:07.487 18:22:38 dpdk_mem_utility -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:07.487 18:22:38 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57739' 00:24:07.487 killing process with pid 57739 00:24:07.487 18:22:38 dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 57739 00:24:07.487 18:22:38 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 57739 00:24:10.017 ************************************ 00:24:10.017 END TEST dpdk_mem_utility 00:24:10.017 ************************************ 00:24:10.017 00:24:10.017 real 0m4.191s 00:24:10.017 user 0m4.096s 00:24:10.017 sys 0m0.633s 00:24:10.017 18:22:40 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:10.017 18:22:40 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:24:10.018 18:22:40 -- spdk/autotest.sh@168 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:24:10.018 18:22:40 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:24:10.018 18:22:40 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:10.018 18:22:40 -- common/autotest_common.sh@10 -- # set +x 00:24:10.018 ************************************ 00:24:10.018 START TEST event 00:24:10.018 ************************************ 00:24:10.018 18:22:40 event -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:24:10.276 * Looking for test storage... 
00:24:10.276 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:24:10.276 18:22:40 event -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:24:10.276 18:22:40 event -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:24:10.276 18:22:40 event -- common/autotest_common.sh@1711 -- # lcov --version 00:24:10.276 18:22:41 event -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:24:10.276 18:22:41 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:10.276 18:22:41 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:10.276 18:22:41 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:10.276 18:22:41 event -- scripts/common.sh@336 -- # IFS=.-: 00:24:10.276 18:22:41 event -- scripts/common.sh@336 -- # read -ra ver1 00:24:10.276 18:22:41 event -- scripts/common.sh@337 -- # IFS=.-: 00:24:10.276 18:22:41 event -- scripts/common.sh@337 -- # read -ra ver2 00:24:10.276 18:22:41 event -- scripts/common.sh@338 -- # local 'op=<' 00:24:10.276 18:22:41 event -- scripts/common.sh@340 -- # ver1_l=2 00:24:10.276 18:22:41 event -- scripts/common.sh@341 -- # ver2_l=1 00:24:10.276 18:22:41 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:10.276 18:22:41 event -- scripts/common.sh@344 -- # case "$op" in 00:24:10.276 18:22:41 event -- scripts/common.sh@345 -- # : 1 00:24:10.276 18:22:41 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:10.276 18:22:41 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:10.276 18:22:41 event -- scripts/common.sh@365 -- # decimal 1 00:24:10.276 18:22:41 event -- scripts/common.sh@353 -- # local d=1 00:24:10.276 18:22:41 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:10.276 18:22:41 event -- scripts/common.sh@355 -- # echo 1 00:24:10.276 18:22:41 event -- scripts/common.sh@365 -- # ver1[v]=1 00:24:10.276 18:22:41 event -- scripts/common.sh@366 -- # decimal 2 00:24:10.276 18:22:41 event -- scripts/common.sh@353 -- # local d=2 00:24:10.276 18:22:41 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:10.276 18:22:41 event -- scripts/common.sh@355 -- # echo 2 00:24:10.276 18:22:41 event -- scripts/common.sh@366 -- # ver2[v]=2 00:24:10.276 18:22:41 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:10.276 18:22:41 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:10.276 18:22:41 event -- scripts/common.sh@368 -- # return 0 00:24:10.276 18:22:41 event -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:10.276 18:22:41 event -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:24:10.276 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:10.276 --rc genhtml_branch_coverage=1 00:24:10.276 --rc genhtml_function_coverage=1 00:24:10.276 --rc genhtml_legend=1 00:24:10.276 --rc geninfo_all_blocks=1 00:24:10.276 --rc geninfo_unexecuted_blocks=1 00:24:10.276 00:24:10.276 ' 00:24:10.276 18:22:41 event -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:24:10.276 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:10.276 --rc genhtml_branch_coverage=1 00:24:10.276 --rc genhtml_function_coverage=1 00:24:10.276 --rc genhtml_legend=1 00:24:10.276 --rc geninfo_all_blocks=1 00:24:10.276 --rc geninfo_unexecuted_blocks=1 00:24:10.276 00:24:10.276 ' 00:24:10.276 18:22:41 event -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:24:10.276 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:24:10.276 --rc genhtml_branch_coverage=1 00:24:10.276 --rc genhtml_function_coverage=1 00:24:10.276 --rc genhtml_legend=1 00:24:10.276 --rc geninfo_all_blocks=1 00:24:10.276 --rc geninfo_unexecuted_blocks=1 00:24:10.276 00:24:10.276 ' 00:24:10.276 18:22:41 event -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:24:10.276 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:10.276 --rc genhtml_branch_coverage=1 00:24:10.276 --rc genhtml_function_coverage=1 00:24:10.276 --rc genhtml_legend=1 00:24:10.276 --rc geninfo_all_blocks=1 00:24:10.276 --rc geninfo_unexecuted_blocks=1 00:24:10.276 00:24:10.276 ' 00:24:10.276 18:22:41 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:24:10.276 18:22:41 event -- bdev/nbd_common.sh@6 -- # set -e 00:24:10.276 18:22:41 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:24:10.276 18:22:41 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:24:10.276 18:22:41 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:10.276 18:22:41 event -- common/autotest_common.sh@10 -- # set +x 00:24:10.276 ************************************ 00:24:10.276 START TEST event_perf 00:24:10.276 ************************************ 00:24:10.276 18:22:41 event.event_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:24:10.276 Running I/O for 1 seconds...[2024-12-06 18:22:41.134746] Starting SPDK v25.01-pre git sha1 b6a18b192 / DPDK 24.03.0 initialization... 
00:24:10.276 [2024-12-06 18:22:41.134962] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57847 ] 00:24:10.534 [2024-12-06 18:22:41.316976] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:10.534 [2024-12-06 18:22:41.437280] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:10.534 [2024-12-06 18:22:41.437327] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:10.534 [2024-12-06 18:22:41.437474] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:24:10.534 Running I/O for 1 seconds...[2024-12-06 18:22:41.437672] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:11.908 00:24:11.908 lcore 0: 205403 00:24:11.908 lcore 1: 205405 00:24:11.908 lcore 2: 205407 00:24:11.908 lcore 3: 205410 00:24:11.908 done. 
00:24:11.908 00:24:11.908 real 0m1.597s 00:24:11.908 user 0m4.328s 00:24:11.908 sys 0m0.140s 00:24:11.908 18:22:42 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:11.908 ************************************ 00:24:11.908 END TEST event_perf 00:24:11.909 ************************************ 00:24:11.909 18:22:42 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:24:11.909 18:22:42 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:24:11.909 18:22:42 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:24:11.909 18:22:42 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:11.909 18:22:42 event -- common/autotest_common.sh@10 -- # set +x 00:24:11.909 ************************************ 00:24:11.909 START TEST event_reactor 00:24:11.909 ************************************ 00:24:11.909 18:22:42 event.event_reactor -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:24:11.909 [2024-12-06 18:22:42.794105] Starting SPDK v25.01-pre git sha1 b6a18b192 / DPDK 24.03.0 initialization... 
00:24:11.909 [2024-12-06 18:22:42.794291] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57892 ] 00:24:12.178 [2024-12-06 18:22:42.991870] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:12.178 [2024-12-06 18:22:43.107629] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:13.568 test_start 00:24:13.568 oneshot 00:24:13.568 tick 100 00:24:13.568 tick 100 00:24:13.568 tick 250 00:24:13.568 tick 100 00:24:13.568 tick 100 00:24:13.568 tick 100 00:24:13.568 tick 250 00:24:13.568 tick 500 00:24:13.568 tick 100 00:24:13.568 tick 100 00:24:13.568 tick 250 00:24:13.568 tick 100 00:24:13.568 tick 100 00:24:13.568 test_end 00:24:13.568 00:24:13.568 real 0m1.588s 00:24:13.568 user 0m1.362s 00:24:13.568 sys 0m0.116s 00:24:13.568 ************************************ 00:24:13.568 END TEST event_reactor 00:24:13.568 ************************************ 00:24:13.568 18:22:44 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:13.568 18:22:44 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:24:13.568 18:22:44 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:24:13.568 18:22:44 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:24:13.568 18:22:44 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:13.568 18:22:44 event -- common/autotest_common.sh@10 -- # set +x 00:24:13.568 ************************************ 00:24:13.568 START TEST event_reactor_perf 00:24:13.568 ************************************ 00:24:13.568 18:22:44 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:24:13.568 [2024-12-06 
18:22:44.438379] Starting SPDK v25.01-pre git sha1 b6a18b192 / DPDK 24.03.0 initialization... 00:24:13.568 [2024-12-06 18:22:44.438653] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57929 ] 00:24:13.827 [2024-12-06 18:22:44.620184] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:13.827 [2024-12-06 18:22:44.737813] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:15.202 test_start 00:24:15.202 test_end 00:24:15.202 Performance: 377468 events per second 00:24:15.202 00:24:15.202 real 0m1.569s 00:24:15.202 user 0m1.356s 00:24:15.202 sys 0m0.104s 00:24:15.202 18:22:45 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:15.202 18:22:45 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:24:15.202 ************************************ 00:24:15.202 END TEST event_reactor_perf 00:24:15.202 ************************************ 00:24:15.202 18:22:46 event -- event/event.sh@49 -- # uname -s 00:24:15.202 18:22:46 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:24:15.202 18:22:46 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:24:15.202 18:22:46 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:24:15.202 18:22:46 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:15.202 18:22:46 event -- common/autotest_common.sh@10 -- # set +x 00:24:15.202 ************************************ 00:24:15.202 START TEST event_scheduler 00:24:15.202 ************************************ 00:24:15.202 18:22:46 event.event_scheduler -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:24:15.202 * Looking for test storage... 
00:24:15.461 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:24:15.461 18:22:46 event.event_scheduler -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:24:15.461 18:22:46 event.event_scheduler -- common/autotest_common.sh@1711 -- # lcov --version 00:24:15.461 18:22:46 event.event_scheduler -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:24:15.461 18:22:46 event.event_scheduler -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:24:15.461 18:22:46 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:15.461 18:22:46 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:15.461 18:22:46 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:15.461 18:22:46 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:24:15.461 18:22:46 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:24:15.461 18:22:46 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:24:15.461 18:22:46 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:24:15.461 18:22:46 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:24:15.461 18:22:46 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:24:15.461 18:22:46 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:24:15.461 18:22:46 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:15.461 18:22:46 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:24:15.461 18:22:46 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:24:15.461 18:22:46 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:15.461 18:22:46 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:15.461 18:22:46 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:24:15.461 18:22:46 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:24:15.461 18:22:46 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:15.461 18:22:46 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:24:15.461 18:22:46 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:24:15.461 18:22:46 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:24:15.461 18:22:46 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:24:15.461 18:22:46 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:15.461 18:22:46 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:24:15.461 18:22:46 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:24:15.461 18:22:46 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:15.461 18:22:46 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:15.461 18:22:46 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:24:15.461 18:22:46 event.event_scheduler -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:15.461 18:22:46 event.event_scheduler -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:24:15.461 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:15.461 --rc genhtml_branch_coverage=1 00:24:15.461 --rc genhtml_function_coverage=1 00:24:15.461 --rc genhtml_legend=1 00:24:15.461 --rc geninfo_all_blocks=1 00:24:15.461 --rc geninfo_unexecuted_blocks=1 00:24:15.461 00:24:15.461 ' 00:24:15.461 18:22:46 event.event_scheduler -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:24:15.461 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:15.461 --rc genhtml_branch_coverage=1 00:24:15.461 --rc genhtml_function_coverage=1 00:24:15.461 --rc 
genhtml_legend=1 00:24:15.461 --rc geninfo_all_blocks=1 00:24:15.461 --rc geninfo_unexecuted_blocks=1 00:24:15.461 00:24:15.461 ' 00:24:15.461 18:22:46 event.event_scheduler -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:24:15.461 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:15.461 --rc genhtml_branch_coverage=1 00:24:15.461 --rc genhtml_function_coverage=1 00:24:15.461 --rc genhtml_legend=1 00:24:15.461 --rc geninfo_all_blocks=1 00:24:15.461 --rc geninfo_unexecuted_blocks=1 00:24:15.461 00:24:15.461 ' 00:24:15.461 18:22:46 event.event_scheduler -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:24:15.461 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:15.461 --rc genhtml_branch_coverage=1 00:24:15.461 --rc genhtml_function_coverage=1 00:24:15.461 --rc genhtml_legend=1 00:24:15.461 --rc geninfo_all_blocks=1 00:24:15.461 --rc geninfo_unexecuted_blocks=1 00:24:15.461 00:24:15.461 ' 00:24:15.461 18:22:46 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:24:15.461 18:22:46 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=57999 00:24:15.461 18:22:46 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:24:15.462 18:22:46 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:24:15.462 18:22:46 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 57999 00:24:15.462 18:22:46 event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 57999 ']' 00:24:15.462 18:22:46 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:15.462 18:22:46 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:15.462 18:22:46 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/spdk.sock...' 00:24:15.462 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:15.462 18:22:46 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:15.462 18:22:46 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:24:15.462 [2024-12-06 18:22:46.339272] Starting SPDK v25.01-pre git sha1 b6a18b192 / DPDK 24.03.0 initialization... 00:24:15.462 [2024-12-06 18:22:46.339594] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57999 ] 00:24:15.720 [2024-12-06 18:22:46.522706] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:15.720 [2024-12-06 18:22:46.651281] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:15.720 [2024-12-06 18:22:46.651453] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:15.720 [2024-12-06 18:22:46.651550] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:15.720 [2024-12-06 18:22:46.651589] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:24:16.288 18:22:47 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:16.288 18:22:47 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0 00:24:16.288 18:22:47 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:24:16.288 18:22:47 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:16.288 18:22:47 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:24:16.288 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:24:16.288 POWER: Cannot set governor of lcore 0 to userspace 00:24:16.288 POWER: failed to open 
/sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:24:16.288 POWER: Cannot set governor of lcore 0 to performance 00:24:16.288 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:24:16.288 POWER: Cannot set governor of lcore 0 to userspace 00:24:16.288 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:24:16.288 POWER: Cannot set governor of lcore 0 to userspace 00:24:16.288 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0 00:24:16.288 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:24:16.288 POWER: Unable to set Power Management Environment for lcore 0 00:24:16.288 [2024-12-06 18:22:47.215215] dpdk_governor.c: 135:_init_core: *ERROR*: Failed to initialize on core0 00:24:16.288 [2024-12-06 18:22:47.215436] dpdk_governor.c: 196:_init: *ERROR*: Failed to initialize on core0 00:24:16.288 [2024-12-06 18:22:47.215731] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:24:16.288 [2024-12-06 18:22:47.215914] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:24:16.288 [2024-12-06 18:22:47.216321] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:24:16.288 [2024-12-06 18:22:47.216510] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:24:16.288 18:22:47 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:16.288 18:22:47 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:24:16.288 18:22:47 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:16.288 18:22:47 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:24:16.856 [2024-12-06 18:22:47.578157] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
00:24:16.856 18:22:47 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:16.856 18:22:47 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:24:16.856 18:22:47 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:24:16.856 18:22:47 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:16.856 18:22:47 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:24:16.856 ************************************ 00:24:16.856 START TEST scheduler_create_thread 00:24:16.856 ************************************ 00:24:16.856 18:22:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread 00:24:16.856 18:22:47 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:24:16.856 18:22:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:16.856 18:22:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:24:16.856 2 00:24:16.856 18:22:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:16.856 18:22:47 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:24:16.856 18:22:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:16.856 18:22:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:24:16.856 3 00:24:16.856 18:22:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:16.856 18:22:47 event.event_scheduler.scheduler_create_thread -- 
scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:24:16.856 18:22:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:16.856 18:22:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:24:16.856 4 00:24:16.856 18:22:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:16.856 18:22:47 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:24:16.856 18:22:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:16.856 18:22:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:24:16.856 5 00:24:16.856 18:22:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:16.856 18:22:47 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:24:16.856 18:22:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:16.856 18:22:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:24:16.856 6 00:24:16.856 18:22:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:16.856 18:22:47 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:24:16.856 18:22:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:16.856 18:22:47 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@10 -- # set +x 00:24:16.856 7 00:24:16.856 18:22:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:16.856 18:22:47 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:24:16.856 18:22:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:16.856 18:22:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:24:16.856 8 00:24:16.856 18:22:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:16.856 18:22:47 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:24:16.856 18:22:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:16.856 18:22:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:24:16.856 9 00:24:16.856 18:22:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:16.856 18:22:47 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:24:16.856 18:22:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:16.856 18:22:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:24:16.856 10 00:24:16.856 18:22:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:16.856 18:22:47 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n 
half_active -a 0 00:24:16.856 18:22:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:16.856 18:22:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:24:16.856 18:22:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:16.856 18:22:47 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:24:16.856 18:22:47 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:24:16.856 18:22:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:16.856 18:22:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:24:16.856 18:22:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:16.856 18:22:47 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:24:16.856 18:22:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:16.856 18:22:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:24:18.235 18:22:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:18.235 18:22:49 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:24:18.235 18:22:49 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:24:18.235 18:22:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:18.235 18:22:49 
event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:24:19.610 ************************************ 00:24:19.610 END TEST scheduler_create_thread 00:24:19.610 ************************************ 00:24:19.610 18:22:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:19.610 00:24:19.610 real 0m2.619s 00:24:19.610 user 0m0.027s 00:24:19.610 sys 0m0.008s 00:24:19.610 18:22:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:19.610 18:22:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:24:19.610 18:22:50 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:24:19.610 18:22:50 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 57999 00:24:19.610 18:22:50 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 57999 ']' 00:24:19.610 18:22:50 event.event_scheduler -- common/autotest_common.sh@958 -- # kill -0 57999 00:24:19.610 18:22:50 event.event_scheduler -- common/autotest_common.sh@959 -- # uname 00:24:19.610 18:22:50 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:19.610 18:22:50 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57999 00:24:19.610 killing process with pid 57999 00:24:19.610 18:22:50 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:24:19.610 18:22:50 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:24:19.610 18:22:50 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57999' 00:24:19.610 18:22:50 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 57999 00:24:19.610 18:22:50 event.event_scheduler -- common/autotest_common.sh@978 -- # wait 57999 00:24:19.867 [2024-12-06 18:22:50.691767] scheduler.c: 
360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:24:21.244 00:24:21.244 real 0m5.829s 00:24:21.244 user 0m9.859s 00:24:21.244 sys 0m0.614s 00:24:21.244 ************************************ 00:24:21.244 END TEST event_scheduler 00:24:21.244 ************************************ 00:24:21.244 18:22:51 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:21.244 18:22:51 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:24:21.244 18:22:51 event -- event/event.sh@51 -- # modprobe -n nbd 00:24:21.244 18:22:51 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:24:21.244 18:22:51 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:24:21.244 18:22:51 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:21.244 18:22:51 event -- common/autotest_common.sh@10 -- # set +x 00:24:21.244 ************************************ 00:24:21.244 START TEST app_repeat 00:24:21.244 ************************************ 00:24:21.244 18:22:51 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test 00:24:21.244 18:22:51 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:24:21.244 18:22:51 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:24:21.244 18:22:51 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:24:21.244 18:22:51 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:24:21.244 18:22:51 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:24:21.244 18:22:51 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:24:21.244 18:22:51 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:24:21.244 18:22:51 event.app_repeat -- event/event.sh@19 -- # repeat_pid=58111 00:24:21.244 18:22:51 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:24:21.244 
18:22:51 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:24:21.244 Process app_repeat pid: 58111 00:24:21.244 18:22:51 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 58111' 00:24:21.244 18:22:51 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:24:21.244 spdk_app_start Round 0 00:24:21.244 18:22:51 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:24:21.244 18:22:51 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58111 /var/tmp/spdk-nbd.sock 00:24:21.244 18:22:51 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58111 ']' 00:24:21.244 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:24:21.244 18:22:51 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:24:21.244 18:22:51 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:21.244 18:22:51 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:24:21.244 18:22:51 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:21.244 18:22:51 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:24:21.244 [2024-12-06 18:22:52.023229] Starting SPDK v25.01-pre git sha1 b6a18b192 / DPDK 24.03.0 initialization... 
00:24:21.244 [2024-12-06 18:22:52.023347] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58111 ] 00:24:21.244 [2024-12-06 18:22:52.188623] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:24:21.503 [2024-12-06 18:22:52.305250] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:21.503 [2024-12-06 18:22:52.305294] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:22.108 18:22:52 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:22.108 18:22:52 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:24:22.108 18:22:52 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:24:22.367 Malloc0 00:24:22.367 18:22:53 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:24:22.627 Malloc1 00:24:22.627 18:22:53 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:24:22.627 18:22:53 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:24:22.627 18:22:53 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:24:22.627 18:22:53 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:24:22.627 18:22:53 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:24:22.627 18:22:53 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:24:22.627 18:22:53 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:24:22.627 18:22:53 event.app_repeat -- 
bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:24:22.627 18:22:53 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:24:22.627 18:22:53 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:24:22.627 18:22:53 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:24:22.627 18:22:53 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:24:22.627 18:22:53 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:24:22.627 18:22:53 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:24:22.627 18:22:53 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:24:22.627 18:22:53 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:24:22.886 /dev/nbd0 00:24:22.886 18:22:53 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:24:22.886 18:22:53 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:24:22.886 18:22:53 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:24:22.886 18:22:53 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:24:22.886 18:22:53 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:24:22.886 18:22:53 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:24:22.886 18:22:53 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:24:22.886 18:22:53 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:24:22.886 18:22:53 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:24:22.886 18:22:53 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:24:22.886 18:22:53 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:24:22.886 1+0 records in 00:24:22.886 1+0 
records out 00:24:22.886 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000259839 s, 15.8 MB/s 00:24:22.886 18:22:53 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:24:22.886 18:22:53 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:24:22.886 18:22:53 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:24:22.886 18:22:53 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:24:22.886 18:22:53 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:24:22.886 18:22:53 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:24:22.886 18:22:53 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:24:22.886 18:22:53 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:24:23.144 /dev/nbd1 00:24:23.144 18:22:53 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:24:23.144 18:22:53 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:24:23.144 18:22:53 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:24:23.144 18:22:53 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:24:23.144 18:22:53 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:24:23.144 18:22:53 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:24:23.144 18:22:53 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:24:23.145 18:22:53 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:24:23.145 18:22:53 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:24:23.145 18:22:53 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:24:23.145 18:22:53 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 
of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:24:23.145 1+0 records in 00:24:23.145 1+0 records out 00:24:23.145 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000263036 s, 15.6 MB/s 00:24:23.145 18:22:53 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:24:23.145 18:22:53 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:24:23.145 18:22:53 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:24:23.145 18:22:53 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:24:23.145 18:22:53 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:24:23.145 18:22:53 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:24:23.145 18:22:53 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:24:23.145 18:22:53 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:24:23.145 18:22:53 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:24:23.145 18:22:53 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:24:23.403 18:22:54 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:24:23.403 { 00:24:23.403 "nbd_device": "/dev/nbd0", 00:24:23.403 "bdev_name": "Malloc0" 00:24:23.403 }, 00:24:23.403 { 00:24:23.403 "nbd_device": "/dev/nbd1", 00:24:23.403 "bdev_name": "Malloc1" 00:24:23.403 } 00:24:23.403 ]' 00:24:23.403 18:22:54 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:24:23.403 { 00:24:23.403 "nbd_device": "/dev/nbd0", 00:24:23.403 "bdev_name": "Malloc0" 00:24:23.403 }, 00:24:23.403 { 00:24:23.403 "nbd_device": "/dev/nbd1", 00:24:23.403 "bdev_name": "Malloc1" 00:24:23.403 } 00:24:23.403 ]' 00:24:23.403 18:22:54 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 
00:24:23.403 18:22:54 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:24:23.403 /dev/nbd1' 00:24:23.403 18:22:54 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:24:23.403 /dev/nbd1' 00:24:23.403 18:22:54 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:24:23.403 18:22:54 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:24:23.403 18:22:54 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:24:23.403 18:22:54 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:24:23.404 18:22:54 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:24:23.404 18:22:54 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:24:23.404 18:22:54 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:24:23.404 18:22:54 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:24:23.404 18:22:54 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:24:23.404 18:22:54 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:24:23.404 18:22:54 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:24:23.404 18:22:54 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:24:23.404 256+0 records in 00:24:23.404 256+0 records out 00:24:23.404 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00428506 s, 245 MB/s 00:24:23.404 18:22:54 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:24:23.404 18:22:54 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:24:23.404 256+0 records in 00:24:23.404 256+0 records out 00:24:23.404 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0303476 s, 34.6 MB/s 00:24:23.404 18:22:54 event.app_repeat -- 
bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:24:23.404 18:22:54 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:24:23.404 256+0 records in 00:24:23.404 256+0 records out 00:24:23.404 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0381469 s, 27.5 MB/s 00:24:23.404 18:22:54 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:24:23.404 18:22:54 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:24:23.404 18:22:54 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:24:23.404 18:22:54 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:24:23.404 18:22:54 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:24:23.404 18:22:54 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:24:23.404 18:22:54 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:24:23.404 18:22:54 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:24:23.404 18:22:54 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:24:23.404 18:22:54 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:24:23.404 18:22:54 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:24:23.404 18:22:54 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:24:23.404 18:22:54 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:24:23.404 18:22:54 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:24:23.404 18:22:54 event.app_repeat -- bdev/nbd_common.sh@50 -- # 
nbd_list=('/dev/nbd0' '/dev/nbd1') 00:24:23.404 18:22:54 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:24:23.404 18:22:54 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:24:23.404 18:22:54 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:24:23.404 18:22:54 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:24:23.662 18:22:54 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:24:23.920 18:22:54 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:24:23.920 18:22:54 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:24:23.920 18:22:54 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:24:23.920 18:22:54 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:24:23.920 18:22:54 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:24:23.920 18:22:54 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:24:23.920 18:22:54 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:24:23.920 18:22:54 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:24:23.920 18:22:54 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:24:23.920 18:22:54 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:24:23.920 18:22:54 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:24:23.920 18:22:54 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:24:23.920 18:22:54 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:24:23.920 18:22:54 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:24:23.920 18:22:54 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:24:23.920 18:22:54 event.app_repeat -- bdev/nbd_common.sh@41 -- # 
break 00:24:23.920 18:22:54 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:24:23.920 18:22:54 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:24:23.920 18:22:54 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:24:23.920 18:22:54 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:24:24.178 18:22:55 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:24:24.178 18:22:55 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:24:24.178 18:22:55 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:24:24.435 18:22:55 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:24:24.435 18:22:55 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:24:24.435 18:22:55 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:24:24.435 18:22:55 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:24:24.435 18:22:55 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:24:24.435 18:22:55 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:24:24.435 18:22:55 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:24:24.435 18:22:55 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:24:24.435 18:22:55 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:24:24.435 18:22:55 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:24:24.692 18:22:55 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:24:26.063 [2024-12-06 18:22:56.745699] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:24:26.063 [2024-12-06 18:22:56.857427] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:26.063 [2024-12-06 18:22:56.857428] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:26.322 
[2024-12-06 18:22:57.046076] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:24:26.322 [2024-12-06 18:22:57.046171] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:24:27.697 spdk_app_start Round 1 00:24:27.697 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:24:27.697 18:22:58 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:24:27.697 18:22:58 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:24:27.697 18:22:58 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58111 /var/tmp/spdk-nbd.sock 00:24:27.697 18:22:58 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58111 ']' 00:24:27.697 18:22:58 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:24:27.697 18:22:58 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:27.697 18:22:58 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:24:27.697 18:22:58 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:27.697 18:22:58 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:24:27.956 18:22:58 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:27.956 18:22:58 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:24:27.956 18:22:58 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:24:28.213 Malloc0 00:24:28.214 18:22:59 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:24:28.780 Malloc1 00:24:28.780 18:22:59 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:24:28.780 18:22:59 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:24:28.780 18:22:59 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:24:28.780 18:22:59 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:24:28.780 18:22:59 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:24:28.780 18:22:59 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:24:28.780 18:22:59 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:24:28.780 18:22:59 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:24:28.780 18:22:59 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:24:28.780 18:22:59 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:24:28.780 18:22:59 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:24:28.780 18:22:59 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:24:28.780 18:22:59 
event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:24:28.780 18:22:59 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:24:28.780 18:22:59 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:24:28.780 18:22:59 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:24:28.780 /dev/nbd0 00:24:29.037 18:22:59 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:24:29.037 18:22:59 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:24:29.037 18:22:59 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:24:29.037 18:22:59 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:24:29.037 18:22:59 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:24:29.037 18:22:59 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:24:29.037 18:22:59 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:24:29.037 18:22:59 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:24:29.037 18:22:59 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:24:29.037 18:22:59 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:24:29.037 18:22:59 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:24:29.037 1+0 records in 00:24:29.037 1+0 records out 00:24:29.037 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000403267 s, 10.2 MB/s 00:24:29.037 18:22:59 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:24:29.037 18:22:59 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:24:29.037 18:22:59 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:24:29.037 
18:22:59 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:24:29.037 18:22:59 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:24:29.037 18:22:59 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:24:29.037 18:22:59 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:24:29.037 18:22:59 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:24:29.037 /dev/nbd1 00:24:29.295 18:22:59 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:24:29.295 18:22:59 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:24:29.295 18:22:59 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:24:29.295 18:22:59 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:24:29.295 18:22:59 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:24:29.295 18:22:59 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:24:29.295 18:22:59 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:24:29.295 18:22:59 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:24:29.295 18:22:59 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:24:29.295 18:22:59 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:24:29.295 18:22:59 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:24:29.295 1+0 records in 00:24:29.295 1+0 records out 00:24:29.295 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000404127 s, 10.1 MB/s 00:24:29.295 18:23:00 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:24:29.295 18:23:00 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:24:29.295 18:23:00 event.app_repeat 
-- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:24:29.295 18:23:00 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:24:29.295 18:23:00 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:24:29.295 18:23:00 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:24:29.295 18:23:00 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:24:29.295 18:23:00 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:24:29.295 18:23:00 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:24:29.295 18:23:00 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:24:29.295 18:23:00 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:24:29.295 { 00:24:29.295 "nbd_device": "/dev/nbd0", 00:24:29.295 "bdev_name": "Malloc0" 00:24:29.295 }, 00:24:29.295 { 00:24:29.295 "nbd_device": "/dev/nbd1", 00:24:29.295 "bdev_name": "Malloc1" 00:24:29.295 } 00:24:29.295 ]' 00:24:29.295 18:23:00 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:24:29.295 18:23:00 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:24:29.295 { 00:24:29.295 "nbd_device": "/dev/nbd0", 00:24:29.295 "bdev_name": "Malloc0" 00:24:29.295 }, 00:24:29.295 { 00:24:29.295 "nbd_device": "/dev/nbd1", 00:24:29.295 "bdev_name": "Malloc1" 00:24:29.295 } 00:24:29.295 ]' 00:24:29.553 18:23:00 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:24:29.553 /dev/nbd1' 00:24:29.553 18:23:00 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:24:29.553 18:23:00 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:24:29.553 /dev/nbd1' 00:24:29.553 18:23:00 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:24:29.553 18:23:00 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:24:29.553 
18:23:00 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:24:29.553 18:23:00 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:24:29.553 18:23:00 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:24:29.553 18:23:00 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:24:29.553 18:23:00 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:24:29.553 18:23:00 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:24:29.553 18:23:00 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:24:29.553 18:23:00 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:24:29.553 18:23:00 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:24:29.553 256+0 records in 00:24:29.553 256+0 records out 00:24:29.553 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00516006 s, 203 MB/s 00:24:29.553 18:23:00 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:24:29.553 18:23:00 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:24:29.553 256+0 records in 00:24:29.553 256+0 records out 00:24:29.553 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0259061 s, 40.5 MB/s 00:24:29.553 18:23:00 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:24:29.553 18:23:00 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:24:29.553 256+0 records in 00:24:29.553 256+0 records out 00:24:29.553 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0340515 s, 30.8 MB/s 00:24:29.553 18:23:00 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 
00:24:29.553 18:23:00 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:24:29.553 18:23:00 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:24:29.553 18:23:00 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:24:29.553 18:23:00 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:24:29.553 18:23:00 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:24:29.553 18:23:00 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:24:29.553 18:23:00 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:24:29.553 18:23:00 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:24:29.553 18:23:00 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:24:29.553 18:23:00 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:24:29.553 18:23:00 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:24:29.553 18:23:00 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:24:29.553 18:23:00 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:24:29.553 18:23:00 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:24:29.553 18:23:00 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:24:29.553 18:23:00 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:24:29.553 18:23:00 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:24:29.553 18:23:00 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:24:29.810 18:23:00 
event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:24:29.810 18:23:00 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:24:29.810 18:23:00 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:24:29.810 18:23:00 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:24:29.810 18:23:00 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:24:29.810 18:23:00 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:24:29.810 18:23:00 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:24:29.810 18:23:00 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:24:29.810 18:23:00 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:24:29.810 18:23:00 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:24:30.067 18:23:00 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:24:30.067 18:23:00 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:24:30.067 18:23:00 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:24:30.067 18:23:00 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:24:30.067 18:23:00 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:24:30.067 18:23:00 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:24:30.067 18:23:00 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:24:30.067 18:23:00 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:24:30.067 18:23:00 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:24:30.067 18:23:00 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:24:30.067 18:23:00 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:24:30.326 18:23:01 
event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:24:30.326 18:23:01 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:24:30.326 18:23:01 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:24:30.326 18:23:01 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:24:30.326 18:23:01 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:24:30.326 18:23:01 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:24:30.326 18:23:01 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:24:30.326 18:23:01 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:24:30.326 18:23:01 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:24:30.326 18:23:01 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:24:30.326 18:23:01 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:24:30.326 18:23:01 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:24:30.326 18:23:01 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:24:30.891 18:23:01 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:24:31.852 [2024-12-06 18:23:02.730365] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:24:32.111 [2024-12-06 18:23:02.844488] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:32.111 [2024-12-06 18:23:02.844508] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:32.111 [2024-12-06 18:23:03.043820] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:24:32.111 [2024-12-06 18:23:03.043908] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:24:34.008 spdk_app_start Round 2 00:24:34.008 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:24:34.008 18:23:04 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:24:34.008 18:23:04 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:24:34.008 18:23:04 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58111 /var/tmp/spdk-nbd.sock 00:24:34.008 18:23:04 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58111 ']' 00:24:34.008 18:23:04 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:24:34.008 18:23:04 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:34.008 18:23:04 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:24:34.008 18:23:04 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:34.009 18:23:04 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:24:34.009 18:23:04 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:34.009 18:23:04 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:24:34.009 18:23:04 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:24:34.267 Malloc0 00:24:34.267 18:23:05 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:24:34.524 Malloc1 00:24:34.524 18:23:05 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:24:34.524 18:23:05 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:24:34.524 18:23:05 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:24:34.524 18:23:05 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:24:34.524 18:23:05 event.app_repeat -- bdev/nbd_common.sh@92 -- # 
nbd_list=('/dev/nbd0' '/dev/nbd1') 00:24:34.524 18:23:05 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:24:34.524 18:23:05 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:24:34.524 18:23:05 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:24:34.524 18:23:05 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:24:34.524 18:23:05 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:24:34.524 18:23:05 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:24:34.524 18:23:05 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:24:34.524 18:23:05 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:24:34.524 18:23:05 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:24:34.524 18:23:05 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:24:34.524 18:23:05 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:24:34.782 /dev/nbd0 00:24:34.782 18:23:05 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:24:34.782 18:23:05 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:24:34.782 18:23:05 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:24:34.782 18:23:05 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:24:34.782 18:23:05 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:24:34.782 18:23:05 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:24:34.782 18:23:05 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:24:34.782 18:23:05 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:24:34.782 18:23:05 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 
00:24:34.782 18:23:05 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:24:34.782 18:23:05 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:24:34.782 1+0 records in 00:24:34.782 1+0 records out 00:24:34.782 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000233284 s, 17.6 MB/s 00:24:34.782 18:23:05 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:24:34.782 18:23:05 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:24:34.782 18:23:05 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:24:34.782 18:23:05 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:24:34.782 18:23:05 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:24:34.782 18:23:05 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:24:34.782 18:23:05 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:24:34.782 18:23:05 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:24:35.040 /dev/nbd1 00:24:35.040 18:23:05 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:24:35.040 18:23:05 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:24:35.040 18:23:05 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:24:35.040 18:23:05 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:24:35.040 18:23:05 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:24:35.040 18:23:05 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:24:35.040 18:23:05 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:24:35.040 18:23:05 event.app_repeat -- 
common/autotest_common.sh@877 -- # break 00:24:35.040 18:23:05 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:24:35.040 18:23:05 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:24:35.040 18:23:05 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:24:35.040 1+0 records in 00:24:35.040 1+0 records out 00:24:35.040 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00044215 s, 9.3 MB/s 00:24:35.040 18:23:05 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:24:35.040 18:23:05 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:24:35.040 18:23:05 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:24:35.040 18:23:05 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:24:35.040 18:23:05 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:24:35.040 18:23:05 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:24:35.040 18:23:05 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:24:35.040 18:23:05 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:24:35.040 18:23:05 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:24:35.040 18:23:05 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:24:35.297 18:23:06 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:24:35.297 { 00:24:35.297 "nbd_device": "/dev/nbd0", 00:24:35.297 "bdev_name": "Malloc0" 00:24:35.297 }, 00:24:35.297 { 00:24:35.297 "nbd_device": "/dev/nbd1", 00:24:35.297 "bdev_name": "Malloc1" 00:24:35.297 } 00:24:35.297 ]' 00:24:35.297 18:23:06 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:24:35.297 { 
00:24:35.297 "nbd_device": "/dev/nbd0", 00:24:35.297 "bdev_name": "Malloc0" 00:24:35.297 }, 00:24:35.297 { 00:24:35.297 "nbd_device": "/dev/nbd1", 00:24:35.297 "bdev_name": "Malloc1" 00:24:35.297 } 00:24:35.297 ]' 00:24:35.297 18:23:06 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:24:35.297 18:23:06 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:24:35.297 /dev/nbd1' 00:24:35.297 18:23:06 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:24:35.297 /dev/nbd1' 00:24:35.297 18:23:06 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:24:35.297 18:23:06 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:24:35.297 18:23:06 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:24:35.297 18:23:06 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:24:35.297 18:23:06 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:24:35.297 18:23:06 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:24:35.297 18:23:06 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:24:35.297 18:23:06 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:24:35.297 18:23:06 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:24:35.297 18:23:06 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:24:35.297 18:23:06 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:24:35.297 18:23:06 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:24:35.297 256+0 records in 00:24:35.297 256+0 records out 00:24:35.297 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00520743 s, 201 MB/s 00:24:35.297 18:23:06 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:24:35.297 18:23:06 event.app_repeat -- 
bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:24:35.297 256+0 records in 00:24:35.297 256+0 records out 00:24:35.297 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0308754 s, 34.0 MB/s 00:24:35.297 18:23:06 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:24:35.297 18:23:06 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:24:35.297 256+0 records in 00:24:35.297 256+0 records out 00:24:35.297 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0322263 s, 32.5 MB/s 00:24:35.297 18:23:06 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:24:35.297 18:23:06 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:24:35.297 18:23:06 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:24:35.297 18:23:06 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:24:35.297 18:23:06 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:24:35.297 18:23:06 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:24:35.297 18:23:06 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:24:35.297 18:23:06 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:24:35.297 18:23:06 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:24:35.555 18:23:06 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:24:35.555 18:23:06 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:24:35.555 18:23:06 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 
00:24:35.555 18:23:06 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:24:35.555 18:23:06 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:24:35.555 18:23:06 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:24:35.555 18:23:06 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:24:35.555 18:23:06 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:24:35.555 18:23:06 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:24:35.555 18:23:06 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:24:35.555 18:23:06 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:24:35.555 18:23:06 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:24:35.555 18:23:06 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:24:35.555 18:23:06 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:24:35.555 18:23:06 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:24:35.555 18:23:06 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:24:35.555 18:23:06 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:24:35.555 18:23:06 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:24:35.555 18:23:06 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:24:35.555 18:23:06 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:24:35.813 18:23:06 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:24:35.813 18:23:06 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:24:35.813 18:23:06 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:24:35.813 18:23:06 
event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:24:35.813 18:23:06 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:24:35.813 18:23:06 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:24:35.813 18:23:06 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:24:35.813 18:23:06 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:24:35.813 18:23:06 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:24:35.813 18:23:06 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:24:35.813 18:23:06 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:24:36.071 18:23:06 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:24:36.071 18:23:06 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:24:36.071 18:23:06 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:24:36.071 18:23:07 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:24:36.071 18:23:07 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:24:36.071 18:23:07 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:24:36.329 18:23:07 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:24:36.329 18:23:07 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:24:36.329 18:23:07 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:24:36.329 18:23:07 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:24:36.329 18:23:07 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:24:36.329 18:23:07 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:24:36.329 18:23:07 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:24:36.587 18:23:07 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:24:37.962 
[2024-12-06 18:23:08.628898] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:24:37.962 [2024-12-06 18:23:08.741509] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:37.962 [2024-12-06 18:23:08.741510] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:38.242 [2024-12-06 18:23:08.938592] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:24:38.242 [2024-12-06 18:23:08.938680] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:24:39.617 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:24:39.617 18:23:10 event.app_repeat -- event/event.sh@38 -- # waitforlisten 58111 /var/tmp/spdk-nbd.sock 00:24:39.617 18:23:10 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58111 ']' 00:24:39.617 18:23:10 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:24:39.617 18:23:10 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:39.617 18:23:10 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:24:39.617 18:23:10 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:39.617 18:23:10 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:24:39.876 18:23:10 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:39.876 18:23:10 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:24:39.876 18:23:10 event.app_repeat -- event/event.sh@39 -- # killprocess 58111 00:24:39.876 18:23:10 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 58111 ']' 00:24:39.876 18:23:10 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 58111 00:24:39.876 18:23:10 event.app_repeat -- common/autotest_common.sh@959 -- # uname 00:24:39.876 18:23:10 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:39.876 18:23:10 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58111 00:24:39.876 killing process with pid 58111 00:24:39.876 18:23:10 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:39.876 18:23:10 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:39.876 18:23:10 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58111' 00:24:39.876 18:23:10 event.app_repeat -- common/autotest_common.sh@973 -- # kill 58111 00:24:39.876 18:23:10 event.app_repeat -- common/autotest_common.sh@978 -- # wait 58111 00:24:40.813 spdk_app_start is called in Round 0. 00:24:40.813 Shutdown signal received, stop current app iteration 00:24:40.813 Starting SPDK v25.01-pre git sha1 b6a18b192 / DPDK 24.03.0 reinitialization... 00:24:40.813 spdk_app_start is called in Round 1. 00:24:40.813 Shutdown signal received, stop current app iteration 00:24:40.813 Starting SPDK v25.01-pre git sha1 b6a18b192 / DPDK 24.03.0 reinitialization... 00:24:40.813 spdk_app_start is called in Round 2. 
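`nbd_get_count` (seen above returning 2 with both disks attached, then 0 after the disks are stopped) reduces the `nbd_get_disks` JSON reply to device paths and counts them with `grep -c /dev/nbd`. A sketch of that counting step against a canned copy of the log's JSON; `grep -o` stands in for the log's `jq -r '.[] | .nbd_device'` so the sketch has no jq dependency:

```shell
#!/usr/bin/env bash
# Sketch of the nbd_get_count counting step. The JSON is a canned copy of the
# log's nbd_get_disks reply; grep -o replaces jq to extract the device paths.
set -euo pipefail

nbd_disks_json='[ { "nbd_device": "/dev/nbd0", "bdev_name": "Malloc0" },
                  { "nbd_device": "/dev/nbd1", "bdev_name": "Malloc1" } ]'

# pull out the .nbd_device values, then count them
nbd_disks_name=$(printf '%s\n' "$nbd_disks_json" | grep -o '/dev/nbd[0-9]*')
count=$(printf '%s\n' "$nbd_disks_name" | grep -c /dev/nbd || true)
echo "count=$count"
```

The `|| true` mirrors the empty-list case from the log: with no disks attached, `grep -c` prints 0 but exits nonzero, which must not abort the script under `set -e`.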
00:24:40.813 Shutdown signal received, stop current app iteration 00:24:40.813 Starting SPDK v25.01-pre git sha1 b6a18b192 / DPDK 24.03.0 reinitialization... 00:24:40.813 spdk_app_start is called in Round 3. 00:24:40.813 Shutdown signal received, stop current app iteration 00:24:41.073 18:23:11 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:24:41.073 18:23:11 event.app_repeat -- event/event.sh@42 -- # return 0 00:24:41.073 00:24:41.073 real 0m19.815s 00:24:41.073 user 0m42.441s 00:24:41.073 sys 0m3.076s 00:24:41.073 ************************************ 00:24:41.073 END TEST app_repeat 00:24:41.073 ************************************ 00:24:41.073 18:23:11 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:41.073 18:23:11 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:24:41.073 18:23:11 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:24:41.073 18:23:11 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:24:41.073 18:23:11 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:24:41.073 18:23:11 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:41.073 18:23:11 event -- common/autotest_common.sh@10 -- # set +x 00:24:41.073 ************************************ 00:24:41.073 START TEST cpu_locks 00:24:41.073 ************************************ 00:24:41.073 18:23:11 event.cpu_locks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:24:41.073 * Looking for test storage... 
00:24:41.073 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:24:41.073 18:23:11 event.cpu_locks -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:24:41.073 18:23:11 event.cpu_locks -- common/autotest_common.sh@1711 -- # lcov --version 00:24:41.073 18:23:11 event.cpu_locks -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:24:41.333 18:23:12 event.cpu_locks -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:24:41.333 18:23:12 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:41.333 18:23:12 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:41.333 18:23:12 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:41.333 18:23:12 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:24:41.333 18:23:12 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:24:41.333 18:23:12 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:24:41.333 18:23:12 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:24:41.333 18:23:12 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:24:41.333 18:23:12 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:24:41.333 18:23:12 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:24:41.333 18:23:12 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:41.333 18:23:12 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:24:41.333 18:23:12 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:24:41.333 18:23:12 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:41.333 18:23:12 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:41.333 18:23:12 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:24:41.333 18:23:12 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:24:41.333 18:23:12 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:41.333 18:23:12 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:24:41.333 18:23:12 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:24:41.333 18:23:12 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:24:41.333 18:23:12 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:24:41.333 18:23:12 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:41.333 18:23:12 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:24:41.333 18:23:12 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:24:41.333 18:23:12 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:41.333 18:23:12 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:41.333 18:23:12 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:24:41.333 18:23:12 event.cpu_locks -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:41.333 18:23:12 event.cpu_locks -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:24:41.333 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:41.333 --rc genhtml_branch_coverage=1 00:24:41.333 --rc genhtml_function_coverage=1 00:24:41.333 --rc genhtml_legend=1 00:24:41.333 --rc geninfo_all_blocks=1 00:24:41.333 --rc geninfo_unexecuted_blocks=1 00:24:41.333 00:24:41.333 ' 00:24:41.333 18:23:12 event.cpu_locks -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:24:41.333 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:41.333 --rc genhtml_branch_coverage=1 00:24:41.333 --rc genhtml_function_coverage=1 00:24:41.333 --rc genhtml_legend=1 00:24:41.333 --rc geninfo_all_blocks=1 00:24:41.333 --rc geninfo_unexecuted_blocks=1 
00:24:41.333 00:24:41.333 ' 00:24:41.333 18:23:12 event.cpu_locks -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:24:41.333 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:41.333 --rc genhtml_branch_coverage=1 00:24:41.333 --rc genhtml_function_coverage=1 00:24:41.333 --rc genhtml_legend=1 00:24:41.333 --rc geninfo_all_blocks=1 00:24:41.333 --rc geninfo_unexecuted_blocks=1 00:24:41.333 00:24:41.333 ' 00:24:41.333 18:23:12 event.cpu_locks -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:24:41.333 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:41.333 --rc genhtml_branch_coverage=1 00:24:41.333 --rc genhtml_function_coverage=1 00:24:41.333 --rc genhtml_legend=1 00:24:41.333 --rc geninfo_all_blocks=1 00:24:41.333 --rc geninfo_unexecuted_blocks=1 00:24:41.333 00:24:41.333 ' 00:24:41.333 18:23:12 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:24:41.333 18:23:12 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:24:41.333 18:23:12 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:24:41.333 18:23:12 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:24:41.333 18:23:12 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:24:41.333 18:23:12 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:41.333 18:23:12 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:24:41.333 ************************************ 00:24:41.333 START TEST default_locks 00:24:41.333 ************************************ 00:24:41.333 18:23:12 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 00:24:41.333 18:23:12 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=58558 00:24:41.333 18:23:12 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:24:41.333 
18:23:12 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 58558 00:24:41.333 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:41.333 18:23:12 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 58558 ']' 00:24:41.333 18:23:12 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:41.333 18:23:12 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:41.333 18:23:12 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:41.333 18:23:12 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:41.333 18:23:12 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:24:41.333 [2024-12-06 18:23:12.202877] Starting SPDK v25.01-pre git sha1 b6a18b192 / DPDK 24.03.0 initialization... 
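The `waitforlisten` step above blocks until the freshly launched `spdk_tgt` is accepting RPCs on `/var/tmp/spdk.sock`, retrying up to `max_retries` times before giving up. A sketch of that polling loop; a plain file created by a background job stands in for the target's listening socket:

```shell
#!/usr/bin/env bash
# Sketch of the waitforlisten polling loop. A background `touch` stands in for
# spdk_tgt creating its RPC socket; the loop polls until the path appears.
set -euo pipefail

rpc_addr=$(mktemp -u)   # a path that does not exist yet
max_retries=100

( sleep 0.2; touch "$rpc_addr" ) &   # stand-in for the target starting up

echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
for (( i = 1; i <= max_retries; i++ )); do
    [ -e "$rpc_addr" ] && break
    sleep 0.1
done
wait                        # reap the background job
[ -e "$rpc_addr" ]          # fail loudly if the "socket" never appeared
echo "listening"
```

The real helper additionally verifies the pid is still alive on each retry, so a target that crashes during startup fails fast instead of burning all 100 retries.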
00:24:41.333 [2024-12-06 18:23:12.203245] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58558 ] 00:24:41.593 [2024-12-06 18:23:12.389684] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:41.593 [2024-12-06 18:23:12.506382] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:42.531 18:23:13 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:42.531 18:23:13 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 00:24:42.531 18:23:13 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 58558 00:24:42.531 18:23:13 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 58558 00:24:42.531 18:23:13 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:24:43.099 18:23:13 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 58558 00:24:43.099 18:23:13 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 58558 ']' 00:24:43.099 18:23:13 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 58558 00:24:43.099 18:23:13 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 00:24:43.099 18:23:13 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:43.099 18:23:13 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58558 00:24:43.099 killing process with pid 58558 00:24:43.099 18:23:13 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:43.099 18:23:13 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:43.099 18:23:13 event.cpu_locks.default_locks -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 58558' 00:24:43.099 18:23:13 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 58558 00:24:43.099 18:23:13 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 58558 00:24:45.634 18:23:16 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 58558 00:24:45.634 18:23:16 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 00:24:45.634 18:23:16 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 58558 00:24:45.634 18:23:16 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:24:45.634 18:23:16 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:45.634 18:23:16 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:24:45.634 18:23:16 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:45.634 18:23:16 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 58558 00:24:45.634 18:23:16 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 58558 ']' 00:24:45.634 18:23:16 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:45.634 18:23:16 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:45.634 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:45.634 18:23:16 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
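The `killprocess` sequence above is careful before sending SIGTERM: `kill -0` confirms the pid is still alive, `ps --no-headers -o comm=` confirms it is the expected reactor process, and a final `wait` reaps it. A sketch of that teardown against a throwaway `sleep`, which stands in for the SPDK target:

```shell
#!/usr/bin/env bash
# Sketch of the killprocess helper: probe with kill -0, check the process
# name, SIGTERM, then wait to reap. A background sleep plays the target.
set -uo pipefail   # no -e: wait on a SIGTERM'd child returns 143 by design

sleep 30 &
pid=$!

killprocess() {
    local pid=$1
    kill -0 "$pid" 2>/dev/null || return 1    # still running?
    local name
    name=$(ps --no-headers -o comm= -p "$pid")
    echo "killing process with pid $pid ($name)"
    kill "$pid"
    wait "$pid" 2>/dev/null                   # reap; ignore the TERM status
    return 0
}

killprocess "$pid" && echo "killed"
```

Skipping `set -e` is deliberate here: `wait` on a child killed by SIGTERM reports status 143, which would otherwise abort the teardown path itself.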
00:24:45.634 18:23:16 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:45.634 18:23:16 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:24:45.634 ERROR: process (pid: 58558) is no longer running 00:24:45.634 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (58558) - No such process 00:24:45.634 18:23:16 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:45.634 18:23:16 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1 00:24:45.634 18:23:16 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1 00:24:45.634 18:23:16 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:24:45.634 18:23:16 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:24:45.634 ************************************ 00:24:45.634 END TEST default_locks 00:24:45.634 ************************************ 00:24:45.634 18:23:16 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:24:45.634 18:23:16 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:24:45.634 18:23:16 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:24:45.634 18:23:16 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:24:45.634 18:23:16 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:24:45.634 00:24:45.634 real 0m4.270s 00:24:45.634 user 0m4.184s 00:24:45.634 sys 0m0.724s 00:24:45.634 18:23:16 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:45.634 18:23:16 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:24:45.634 18:23:16 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:24:45.634 18:23:16 event.cpu_locks -- common/autotest_common.sh@1105 -- # 
'[' 2 -le 1 ']' 00:24:45.634 18:23:16 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:45.634 18:23:16 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:24:45.634 ************************************ 00:24:45.634 START TEST default_locks_via_rpc 00:24:45.634 ************************************ 00:24:45.634 18:23:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc 00:24:45.634 18:23:16 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=58643 00:24:45.634 18:23:16 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:24:45.634 18:23:16 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 58643 00:24:45.634 18:23:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 58643 ']' 00:24:45.634 18:23:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:45.634 18:23:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:45.634 18:23:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:45.634 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:45.634 18:23:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:45.634 18:23:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:24:45.634 [2024-12-06 18:23:16.534639] Starting SPDK v25.01-pre git sha1 b6a18b192 / DPDK 24.03.0 initialization... 
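The earlier ERROR block (`NOT waitforlisten 58558` after the process was killed, ending in `es=1`) exercises the harness's expected-failure path: `NOT` runs its argument, captures the exit status into `es`, and succeeds only if the command failed. A sketch of that inversion helper; the helper name matches the log, while the probe command is a stand-in:

```shell
#!/usr/bin/env bash
# Sketch of the NOT expected-failure helper: run a command, capture its exit
# status, and succeed only when the command failed.
set -uo pipefail

NOT() {
    local es=0
    "$@" || es=$?
    (( es != 0 ))   # success here means the expected failure happened
}

# probing a pid that cannot exist must fail, so NOT succeeds
NOT kill -0 99999999 2>/dev/null && echo "expected failure observed"
```

The real helper also distinguishes exit codes above 128 (signal deaths) from ordinary failures, which is why the log records both `es=1` and the `(( es > 128 ))` check.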
00:24:45.634 [2024-12-06 18:23:16.534768] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58643 ] 00:24:45.892 [2024-12-06 18:23:16.716336] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:45.892 [2024-12-06 18:23:16.833527] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:46.825 18:23:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:46.825 18:23:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:24:46.825 18:23:17 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:24:46.825 18:23:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:46.825 18:23:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:24:46.825 18:23:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:46.825 18:23:17 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:24:46.825 18:23:17 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:24:46.825 18:23:17 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:24:46.825 18:23:17 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:24:46.825 18:23:17 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:24:46.825 18:23:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:46.825 18:23:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:24:46.825 18:23:17 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:46.825 18:23:17 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 58643 00:24:46.825 18:23:17 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 58643 00:24:46.825 18:23:17 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:24:47.390 18:23:18 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 58643 00:24:47.390 18:23:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 58643 ']' 00:24:47.390 18:23:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 58643 00:24:47.391 18:23:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname 00:24:47.391 18:23:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:47.391 18:23:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58643 00:24:47.391 killing process with pid 58643 00:24:47.391 18:23:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:47.391 18:23:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:47.391 18:23:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58643' 00:24:47.391 18:23:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 58643 00:24:47.391 18:23:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 58643 00:24:49.920 00:24:49.920 real 0m4.244s 00:24:49.920 user 0m4.200s 00:24:49.920 sys 0m0.680s 00:24:49.920 18:23:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:49.920 
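The `locks_exist` step above (`lslocks -p 58643 | grep -q spdk_cpu_lock`) passes when the target process holds file locks on its per-core lock files under `/var/tmp/spdk_cpu_lock_*`. A stand-alone sketch of claiming such a lock file (this uses `flock(1)` for brevity and a `/tmp/demo_cpu_lock_*` stand-in path, not SPDK's real mechanism or path):

```shell
# Claim a per-core lock file non-blockingly: open it on a dedicated fd,
# then try to take the lock; a second process doing the same would fall
# into the else branch.
lockfile=/tmp/demo_cpu_lock_000    # stand-in for /var/tmp/spdk_cpu_lock_NNN
exec 9>"$lockfile"
if flock -n 9; then
  echo "claimed core 0"
else
  echo "core 0 already claimed by another process"
fi
```

Because the lock is tied to the open file descriptor, it is released automatically when the holder exits, which is why `lslocks -p <pid>` is a reliable liveness check here.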
************************************ 00:24:49.920 END TEST default_locks_via_rpc 00:24:49.920 ************************************ 00:24:49.920 18:23:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:24:49.920 18:23:20 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:24:49.921 18:23:20 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:24:49.921 18:23:20 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:49.921 18:23:20 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:24:49.921 ************************************ 00:24:49.921 START TEST non_locking_app_on_locked_coremask 00:24:49.921 ************************************ 00:24:49.921 18:23:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask 00:24:49.921 18:23:20 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=58718 00:24:49.921 18:23:20 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 58718 /var/tmp/spdk.sock 00:24:49.921 18:23:20 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:24:49.921 18:23:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 58718 ']' 00:24:49.921 18:23:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:49.921 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
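Each test above finishes with the `killprocess` helper, whose xtrace is visible in the log: it checks the pid is still alive, logs `killing process with pid ...`, and only then signals it. A simplified sketch (the real helper in `autotest_common.sh` also inspects the command name with `ps --no-headers -o comm=` and special-cases `sudo`):

```shell
# Simplified killprocess: verify the pid is alive, log what we are
# about to kill, send SIGTERM, then try to reap it.
killprocess() {
  local pid=$1
  kill -0 "$pid" 2>/dev/null || return 1   # nothing to do, not running
  echo "killing process with pid $pid"
  kill "$pid"
  wait "$pid" 2>/dev/null || true          # reap only if it is our child
}
```

The `kill -0` probe sends no signal at all; it merely asks the kernel whether the pid exists and is signalable, which is why it appears twice per teardown in the trace.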
00:24:49.921 18:23:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:49.921 18:23:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:49.921 18:23:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:49.921 18:23:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:24:49.921 [2024-12-06 18:23:20.850332] Starting SPDK v25.01-pre git sha1 b6a18b192 / DPDK 24.03.0 initialization... 00:24:49.921 [2024-12-06 18:23:20.850458] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58718 ] 00:24:50.180 [2024-12-06 18:23:21.034447] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:50.439 [2024-12-06 18:23:21.147917] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:51.374 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:24:51.374 18:23:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:51.374 18:23:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:24:51.374 18:23:22 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:24:51.374 18:23:22 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=58734 00:24:51.374 18:23:22 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 58734 /var/tmp/spdk2.sock 00:24:51.374 18:23:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 58734 ']' 00:24:51.374 18:23:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:24:51.374 18:23:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:51.374 18:23:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:24:51.374 18:23:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:51.374 18:23:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:24:51.374 [2024-12-06 18:23:22.121433] Starting SPDK v25.01-pre git sha1 b6a18b192 / DPDK 24.03.0 initialization... 
00:24:51.374 [2024-12-06 18:23:22.121819] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58734 ] 00:24:51.374 [2024-12-06 18:23:22.303105] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:24:51.374 [2024-12-06 18:23:22.303172] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:51.633 [2024-12-06 18:23:22.541279] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:54.168 18:23:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:54.168 18:23:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:24:54.168 18:23:24 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 58718 00:24:54.168 18:23:24 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 58718 00:24:54.168 18:23:24 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:24:55.103 18:23:25 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 58718 00:24:55.103 18:23:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 58718 ']' 00:24:55.103 18:23:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 58718 00:24:55.103 18:23:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:24:55.103 18:23:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:55.103 18:23:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 
58718 00:24:55.103 18:23:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:55.103 18:23:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:55.103 killing process with pid 58718 00:24:55.103 18:23:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58718' 00:24:55.103 18:23:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 58718 00:24:55.103 18:23:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 58718 00:25:00.392 18:23:30 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 58734 00:25:00.392 18:23:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 58734 ']' 00:25:00.392 18:23:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 58734 00:25:00.392 18:23:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:25:00.392 18:23:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:00.392 18:23:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58734 00:25:00.392 killing process with pid 58734 00:25:00.392 18:23:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:00.392 18:23:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:00.392 18:23:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58734' 00:25:00.392 18:23:30 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 58734 00:25:00.392 18:23:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 58734 00:25:02.296 00:25:02.296 real 0m12.250s 00:25:02.296 user 0m12.551s 00:25:02.296 sys 0m1.477s 00:25:02.296 18:23:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:02.296 ************************************ 00:25:02.296 END TEST non_locking_app_on_locked_coremask 00:25:02.296 ************************************ 00:25:02.296 18:23:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:25:02.296 18:23:33 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:25:02.296 18:23:33 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:25:02.296 18:23:33 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:02.296 18:23:33 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:25:02.296 ************************************ 00:25:02.296 START TEST locking_app_on_unlocked_coremask 00:25:02.296 ************************************ 00:25:02.296 18:23:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask 00:25:02.296 18:23:33 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=58891 00:25:02.296 18:23:33 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 58891 /var/tmp/spdk.sock 00:25:02.296 18:23:33 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:25:02.296 18:23:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 58891 ']' 
00:25:02.296 18:23:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:02.296 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:02.296 18:23:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:02.296 18:23:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:02.296 18:23:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:02.296 18:23:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:25:02.296 [2024-12-06 18:23:33.175550] Starting SPDK v25.01-pre git sha1 b6a18b192 / DPDK 24.03.0 initialization... 00:25:02.296 [2024-12-06 18:23:33.175681] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58891 ] 00:25:02.554 [2024-12-06 18:23:33.353648] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:25:02.554 [2024-12-06 18:23:33.353698] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:02.554 [2024-12-06 18:23:33.473107] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:03.492 18:23:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:03.492 18:23:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:25:03.492 18:23:34 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=58907 00:25:03.492 18:23:34 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 58907 /var/tmp/spdk2.sock 00:25:03.492 18:23:34 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:25:03.492 18:23:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 58907 ']' 00:25:03.492 18:23:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:25:03.492 18:23:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:03.492 18:23:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:25:03.492 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:25:03.492 18:23:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:03.492 18:23:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:25:03.492 [2024-12-06 18:23:34.422089] Starting SPDK v25.01-pre git sha1 b6a18b192 / DPDK 24.03.0 initialization... 
00:25:03.492 [2024-12-06 18:23:34.422409] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58907 ] 00:25:03.751 [2024-12-06 18:23:34.604658] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:04.010 [2024-12-06 18:23:34.829083] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:06.556 18:23:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:06.556 18:23:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:25:06.556 18:23:36 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 58907 00:25:06.556 18:23:36 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 58907 00:25:06.556 18:23:36 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:25:07.123 18:23:37 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 58891 00:25:07.123 18:23:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 58891 ']' 00:25:07.123 18:23:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 58891 00:25:07.123 18:23:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:25:07.123 18:23:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:07.123 18:23:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58891 00:25:07.123 18:23:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # 
process_name=reactor_0 00:25:07.123 18:23:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:07.123 killing process with pid 58891 00:25:07.123 18:23:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58891' 00:25:07.123 18:23:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 58891 00:25:07.123 18:23:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 58891 00:25:12.392 18:23:42 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 58907 00:25:12.392 18:23:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 58907 ']' 00:25:12.392 18:23:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 58907 00:25:12.392 18:23:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:25:12.392 18:23:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:12.392 18:23:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58907 00:25:12.392 18:23:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:12.392 killing process with pid 58907 00:25:12.392 18:23:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:12.392 18:23:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58907' 00:25:12.392 18:23:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 58907 00:25:12.392 18:23:42 event.cpu_locks.locking_app_on_unlocked_coremask -- 
common/autotest_common.sh@978 -- # wait 58907 00:25:14.318 00:25:14.318 real 0m12.059s 00:25:14.318 user 0m12.301s 00:25:14.318 sys 0m1.446s 00:25:14.318 18:23:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:14.319 ************************************ 00:25:14.319 END TEST locking_app_on_unlocked_coremask 00:25:14.319 ************************************ 00:25:14.319 18:23:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:25:14.319 18:23:45 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:25:14.319 18:23:45 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:25:14.319 18:23:45 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:14.319 18:23:45 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:25:14.319 ************************************ 00:25:14.319 START TEST locking_app_on_locked_coremask 00:25:14.319 ************************************ 00:25:14.319 18:23:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask 00:25:14.319 18:23:45 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=59067 00:25:14.319 18:23:45 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 59067 /var/tmp/spdk.sock 00:25:14.319 18:23:45 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:25:14.319 18:23:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59067 ']' 00:25:14.319 18:23:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:14.319 Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock... 00:25:14.319 18:23:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:14.319 18:23:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:14.319 18:23:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:14.319 18:23:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:25:14.583 [2024-12-06 18:23:45.301763] Starting SPDK v25.01-pre git sha1 b6a18b192 / DPDK 24.03.0 initialization... 00:25:14.584 [2024-12-06 18:23:45.301885] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59067 ] 00:25:14.584 [2024-12-06 18:23:45.483538] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:14.841 [2024-12-06 18:23:45.602308] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:15.776 18:23:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:15.777 18:23:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:25:15.777 18:23:46 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:25:15.777 18:23:46 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=59084 00:25:15.777 18:23:46 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 59084 /var/tmp/spdk2.sock 00:25:15.777 18:23:46 event.cpu_locks.locking_app_on_locked_coremask -- 
common/autotest_common.sh@652 -- # local es=0 00:25:15.777 18:23:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 59084 /var/tmp/spdk2.sock 00:25:15.777 18:23:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:25:15.777 18:23:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:15.777 18:23:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:25:15.777 18:23:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:15.777 18:23:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 59084 /var/tmp/spdk2.sock 00:25:15.777 18:23:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59084 ']' 00:25:15.777 18:23:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:25:15.777 18:23:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:15.777 18:23:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:25:15.777 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:25:15.777 18:23:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:15.777 18:23:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:25:15.777 [2024-12-06 18:23:46.638327] Starting SPDK v25.01-pre git sha1 b6a18b192 / DPDK 24.03.0 initialization... 
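The `NOT waitforlisten 59084 /var/tmp/spdk2.sock` step above is a negative test: the second `spdk_tgt` instance is expected to die with `Cannot create lock on core 0`, and the test passes only if it does. The inversion logic, roughly (simplified from the `valid_exec_arg`/`es` bookkeeping visible in the trace):

```shell
# Run a command that is expected to fail; succeed only when it fails.
NOT() {
  if "$@"; then
    return 1    # the command unexpectedly succeeded
  fi
  return 0      # it failed, which is what the test wanted
}
```

So the subsequent `return 1` from `waitforlisten` and the `es=1` line in the log are the expected, passing outcome, not an error in the test run.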
00:25:15.777 [2024-12-06 18:23:46.638687] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59084 ] 00:25:16.036 [2024-12-06 18:23:46.834352] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 59067 has claimed it. 00:25:16.036 [2024-12-06 18:23:46.834420] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:25:16.603 ERROR: process (pid: 59084) is no longer running 00:25:16.603 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (59084) - No such process 00:25:16.603 18:23:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:16.603 18:23:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1 00:25:16.603 18:23:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1 00:25:16.603 18:23:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:25:16.603 18:23:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:25:16.603 18:23:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:25:16.603 18:23:47 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 59067 00:25:16.603 18:23:47 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59067 00:25:16.603 18:23:47 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:25:16.863 18:23:47 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 59067 00:25:16.863 18:23:47 
event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59067 ']' 00:25:16.863 18:23:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 59067 00:25:16.863 18:23:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:25:16.863 18:23:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:16.863 18:23:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59067 00:25:16.863 killing process with pid 59067 00:25:16.863 18:23:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:16.863 18:23:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:16.863 18:23:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59067' 00:25:16.863 18:23:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 59067 00:25:16.863 18:23:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 59067 00:25:19.400 00:25:19.400 real 0m4.971s 00:25:19.400 user 0m5.174s 00:25:19.400 sys 0m0.849s 00:25:19.400 18:23:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:19.400 18:23:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:25:19.400 ************************************ 00:25:19.400 END TEST locking_app_on_locked_coremask 00:25:19.400 ************************************ 00:25:19.400 18:23:50 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:25:19.400 18:23:50 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 
00:25:19.400 18:23:50 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:19.400 18:23:50 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:25:19.400 ************************************ 00:25:19.400 START TEST locking_overlapped_coremask 00:25:19.400 ************************************ 00:25:19.400 18:23:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask 00:25:19.400 18:23:50 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=59148 00:25:19.400 18:23:50 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 59148 /var/tmp/spdk.sock 00:25:19.400 18:23:50 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:25:19.400 18:23:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 59148 ']' 00:25:19.400 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:19.400 18:23:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:19.400 18:23:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:19.400 18:23:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:19.400 18:23:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:19.400 18:23:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:25:19.661 [2024-12-06 18:23:50.350311] Starting SPDK v25.01-pre git sha1 b6a18b192 / DPDK 24.03.0 initialization... 
00:25:19.661 [2024-12-06 18:23:50.350589] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59148 ] 00:25:19.661 [2024-12-06 18:23:50.525945] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:25:19.920 [2024-12-06 18:23:50.640606] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:19.920 [2024-12-06 18:23:50.640742] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:19.920 [2024-12-06 18:23:50.640778] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:25:20.881 18:23:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:20.881 18:23:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0 00:25:20.881 18:23:51 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:25:20.881 18:23:51 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=59177 00:25:20.881 18:23:51 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 59177 /var/tmp/spdk2.sock 00:25:20.881 18:23:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0 00:25:20.881 18:23:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 59177 /var/tmp/spdk2.sock 00:25:20.881 18:23:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:25:20.881 18:23:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:20.881 18:23:51 event.cpu_locks.locking_overlapped_coremask 
-- common/autotest_common.sh@644 -- # type -t waitforlisten 00:25:20.881 18:23:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:20.881 18:23:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 59177 /var/tmp/spdk2.sock 00:25:20.881 18:23:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 59177 ']' 00:25:20.881 18:23:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:25:20.881 18:23:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:20.881 18:23:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:25:20.881 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:25:20.881 18:23:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:20.881 18:23:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:25:20.881 [2024-12-06 18:23:51.624500] Starting SPDK v25.01-pre git sha1 b6a18b192 / DPDK 24.03.0 initialization... 00:25:20.881 [2024-12-06 18:23:51.624627] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59177 ] 00:25:20.881 [2024-12-06 18:23:51.809513] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 59148 has claimed it. 00:25:20.881 [2024-12-06 18:23:51.809592] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 
00:25:21.449 ERROR: process (pid: 59177) is no longer running 00:25:21.449 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (59177) - No such process 00:25:21.449 18:23:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:21.449 18:23:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1 00:25:21.449 18:23:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1 00:25:21.449 18:23:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:25:21.449 18:23:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:25:21.449 18:23:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:25:21.449 18:23:52 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:25:21.449 18:23:52 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:25:21.449 18:23:52 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:25:21.449 18:23:52 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:25:21.449 18:23:52 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 59148 00:25:21.449 18:23:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 59148 ']' 00:25:21.449 18:23:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 59148 00:25:21.449 18:23:52 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname 00:25:21.449 18:23:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:21.449 18:23:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59148 00:25:21.449 18:23:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:21.449 18:23:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:21.449 18:23:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59148' 00:25:21.449 killing process with pid 59148 00:25:21.449 18:23:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 59148 00:25:21.449 18:23:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 59148 00:25:23.985 ************************************ 00:25:23.985 END TEST locking_overlapped_coremask 00:25:23.985 ************************************ 00:25:23.985 00:25:23.985 real 0m4.512s 00:25:23.985 user 0m12.196s 00:25:23.985 sys 0m0.680s 00:25:23.985 18:23:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:23.986 18:23:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:25:23.986 18:23:54 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:25:23.986 18:23:54 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:25:23.986 18:23:54 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:23.986 18:23:54 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:25:23.986 ************************************ 00:25:23.986 START TEST 
locking_overlapped_coremask_via_rpc 00:25:23.986 ************************************ 00:25:23.986 18:23:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc 00:25:23.986 18:23:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=59241 00:25:23.986 18:23:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:25:23.986 18:23:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 59241 /var/tmp/spdk.sock 00:25:23.986 18:23:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59241 ']' 00:25:23.986 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:23.986 18:23:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:23.986 18:23:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:23.986 18:23:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:23.986 18:23:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:23.986 18:23:54 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:25:24.246 [2024-12-06 18:23:54.940716] Starting SPDK v25.01-pre git sha1 b6a18b192 / DPDK 24.03.0 initialization... 
00:25:24.246 [2024-12-06 18:23:54.940844] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59241 ] 00:25:24.246 [2024-12-06 18:23:55.117643] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:25:24.246 [2024-12-06 18:23:55.117706] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:25:24.508 [2024-12-06 18:23:55.238957] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:24.508 [2024-12-06 18:23:55.239116] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:24.508 [2024-12-06 18:23:55.239183] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:25:25.445 18:23:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:25.445 18:23:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:25:25.445 18:23:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:25:25.445 18:23:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=59259 00:25:25.446 18:23:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 59259 /var/tmp/spdk2.sock 00:25:25.446 18:23:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59259 ']' 00:25:25.446 18:23:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:25:25.446 18:23:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:25.446 18:23:56 
event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:25:25.446 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:25:25.446 18:23:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:25.446 18:23:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:25:25.446 [2024-12-06 18:23:56.157989] Starting SPDK v25.01-pre git sha1 b6a18b192 / DPDK 24.03.0 initialization... 00:25:25.446 [2024-12-06 18:23:56.158124] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59259 ] 00:25:25.446 [2024-12-06 18:23:56.340797] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:25:25.446 [2024-12-06 18:23:56.340870] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:25:25.704 [2024-12-06 18:23:56.642365] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:25:25.704 [2024-12-06 18:23:56.646367] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:25:25.704 [2024-12-06 18:23:56.646402] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:25:28.359 18:23:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:28.359 18:23:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:25:28.359 18:23:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:25:28.359 18:23:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:28.359 18:23:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:25:28.359 18:23:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:28.359 18:23:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:25:28.359 18:23:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:25:28.359 18:23:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:25:28.359 18:23:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:25:28.359 18:23:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:28.359 18:23:58 
event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:25:28.360 18:23:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:28.360 18:23:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:25:28.360 18:23:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:28.360 18:23:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:25:28.360 [2024-12-06 18:23:58.806347] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 59241 has claimed it. 00:25:28.360 request: 00:25:28.360 { 00:25:28.360 "method": "framework_enable_cpumask_locks", 00:25:28.360 "req_id": 1 00:25:28.360 } 00:25:28.360 Got JSON-RPC error response 00:25:28.360 response: 00:25:28.360 { 00:25:28.360 "code": -32603, 00:25:28.360 "message": "Failed to claim CPU core: 2" 00:25:28.360 } 00:25:28.360 18:23:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:25:28.360 18:23:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:25:28.360 18:23:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:25:28.360 18:23:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:25:28.360 18:23:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:25:28.360 18:23:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 59241 /var/tmp/spdk.sock 00:25:28.360 18:23:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # 
'[' -z 59241 ']' 00:25:28.360 18:23:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:28.360 18:23:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:28.360 18:23:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:28.360 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:28.360 18:23:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:28.360 18:23:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:25:28.360 18:23:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:28.360 18:23:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:25:28.360 18:23:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 59259 /var/tmp/spdk2.sock 00:25:28.360 18:23:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59259 ']' 00:25:28.360 18:23:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:25:28.360 18:23:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:28.360 18:23:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:25:28.360 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:25:28.360 18:23:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:28.360 18:23:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:25:28.618 18:23:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:28.618 18:23:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:25:28.618 18:23:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:25:28.618 18:23:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:25:28.618 18:23:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:25:28.618 18:23:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:25:28.618 00:25:28.618 real 0m4.480s 00:25:28.618 user 0m1.301s 00:25:28.618 sys 0m0.251s 00:25:28.618 18:23:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:28.618 18:23:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:25:28.618 ************************************ 00:25:28.618 END TEST locking_overlapped_coremask_via_rpc 00:25:28.618 ************************************ 00:25:28.618 18:23:59 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:25:28.618 18:23:59 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 59241 ]] 00:25:28.618 18:23:59 event.cpu_locks -- event/cpu_locks.sh@15 -- # 
killprocess 59241 00:25:28.618 18:23:59 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59241 ']' 00:25:28.618 18:23:59 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59241 00:25:28.618 18:23:59 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:25:28.618 18:23:59 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:28.618 18:23:59 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59241 00:25:28.618 killing process with pid 59241 00:25:28.618 18:23:59 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:28.618 18:23:59 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:28.618 18:23:59 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59241' 00:25:28.618 18:23:59 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 59241 00:25:28.618 18:23:59 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 59241 00:25:31.148 18:24:01 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 59259 ]] 00:25:31.148 18:24:01 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 59259 00:25:31.148 18:24:01 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59259 ']' 00:25:31.148 18:24:01 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59259 00:25:31.148 18:24:01 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:25:31.148 18:24:01 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:31.148 18:24:01 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59259 00:25:31.148 killing process with pid 59259 00:25:31.148 18:24:01 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:25:31.148 18:24:01 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:25:31.148 18:24:01 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing 
process with pid 59259' 00:25:31.148 18:24:01 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 59259 00:25:31.148 18:24:01 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 59259 00:25:33.685 18:24:04 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:25:33.685 18:24:04 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:25:33.685 18:24:04 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 59241 ]] 00:25:33.685 18:24:04 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 59241 00:25:33.685 18:24:04 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59241 ']' 00:25:33.685 18:24:04 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59241 00:25:33.685 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (59241) - No such process 00:25:33.685 Process with pid 59241 is not found 00:25:33.685 18:24:04 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 59241 is not found' 00:25:33.685 18:24:04 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 59259 ]] 00:25:33.685 18:24:04 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 59259 00:25:33.685 18:24:04 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59259 ']' 00:25:33.685 18:24:04 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59259 00:25:33.685 Process with pid 59259 is not found 00:25:33.685 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (59259) - No such process 00:25:33.685 18:24:04 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 59259 is not found' 00:25:33.685 18:24:04 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:25:33.685 ************************************ 00:25:33.685 END TEST cpu_locks 00:25:33.685 ************************************ 00:25:33.685 00:25:33.685 real 0m52.604s 00:25:33.685 user 1m28.734s 00:25:33.685 sys 0m7.593s 00:25:33.685 18:24:04 event.cpu_locks -- common/autotest_common.sh@1130 -- # 
xtrace_disable 00:25:33.685 18:24:04 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:25:33.685 ************************************ 00:25:33.685 END TEST event 00:25:33.685 ************************************ 00:25:33.685 00:25:33.685 real 1m23.654s 00:25:33.685 user 2m28.323s 00:25:33.685 sys 0m12.054s 00:25:33.685 18:24:04 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:33.685 18:24:04 event -- common/autotest_common.sh@10 -- # set +x 00:25:33.685 18:24:04 -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:25:33.685 18:24:04 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:25:33.685 18:24:04 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:33.685 18:24:04 -- common/autotest_common.sh@10 -- # set +x 00:25:33.685 ************************************ 00:25:33.685 START TEST thread 00:25:33.685 ************************************ 00:25:33.685 18:24:04 thread -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:25:33.945 * Looking for test storage... 
00:25:33.945 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:25:33.945 18:24:04 thread -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:25:33.945 18:24:04 thread -- common/autotest_common.sh@1711 -- # lcov --version 00:25:33.945 18:24:04 thread -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:25:33.945 18:24:04 thread -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:25:33.945 18:24:04 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:33.945 18:24:04 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:33.945 18:24:04 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:33.945 18:24:04 thread -- scripts/common.sh@336 -- # IFS=.-: 00:25:33.945 18:24:04 thread -- scripts/common.sh@336 -- # read -ra ver1 00:25:33.945 18:24:04 thread -- scripts/common.sh@337 -- # IFS=.-: 00:25:33.945 18:24:04 thread -- scripts/common.sh@337 -- # read -ra ver2 00:25:33.945 18:24:04 thread -- scripts/common.sh@338 -- # local 'op=<' 00:25:33.945 18:24:04 thread -- scripts/common.sh@340 -- # ver1_l=2 00:25:33.945 18:24:04 thread -- scripts/common.sh@341 -- # ver2_l=1 00:25:33.945 18:24:04 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:33.945 18:24:04 thread -- scripts/common.sh@344 -- # case "$op" in 00:25:33.945 18:24:04 thread -- scripts/common.sh@345 -- # : 1 00:25:33.945 18:24:04 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:33.945 18:24:04 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:33.945 18:24:04 thread -- scripts/common.sh@365 -- # decimal 1 00:25:33.945 18:24:04 thread -- scripts/common.sh@353 -- # local d=1 00:25:33.945 18:24:04 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:33.945 18:24:04 thread -- scripts/common.sh@355 -- # echo 1 00:25:33.945 18:24:04 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:25:33.945 18:24:04 thread -- scripts/common.sh@366 -- # decimal 2 00:25:33.945 18:24:04 thread -- scripts/common.sh@353 -- # local d=2 00:25:33.945 18:24:04 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:33.945 18:24:04 thread -- scripts/common.sh@355 -- # echo 2 00:25:33.945 18:24:04 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:25:33.945 18:24:04 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:33.945 18:24:04 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:33.945 18:24:04 thread -- scripts/common.sh@368 -- # return 0 00:25:33.945 18:24:04 thread -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:33.945 18:24:04 thread -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:25:33.945 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:33.945 --rc genhtml_branch_coverage=1 00:25:33.945 --rc genhtml_function_coverage=1 00:25:33.945 --rc genhtml_legend=1 00:25:33.945 --rc geninfo_all_blocks=1 00:25:33.945 --rc geninfo_unexecuted_blocks=1 00:25:33.945 00:25:33.945 ' 00:25:33.945 18:24:04 thread -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:25:33.945 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:33.945 --rc genhtml_branch_coverage=1 00:25:33.945 --rc genhtml_function_coverage=1 00:25:33.945 --rc genhtml_legend=1 00:25:33.945 --rc geninfo_all_blocks=1 00:25:33.945 --rc geninfo_unexecuted_blocks=1 00:25:33.945 00:25:33.945 ' 00:25:33.945 18:24:04 thread -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:25:33.945 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:33.945 --rc genhtml_branch_coverage=1 00:25:33.945 --rc genhtml_function_coverage=1 00:25:33.945 --rc genhtml_legend=1 00:25:33.945 --rc geninfo_all_blocks=1 00:25:33.945 --rc geninfo_unexecuted_blocks=1 00:25:33.945 00:25:33.945 ' 00:25:33.945 18:24:04 thread -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:25:33.945 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:33.945 --rc genhtml_branch_coverage=1 00:25:33.945 --rc genhtml_function_coverage=1 00:25:33.945 --rc genhtml_legend=1 00:25:33.945 --rc geninfo_all_blocks=1 00:25:33.945 --rc geninfo_unexecuted_blocks=1 00:25:33.945 00:25:33.945 ' 00:25:33.945 18:24:04 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:25:33.945 18:24:04 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:25:33.945 18:24:04 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:33.945 18:24:04 thread -- common/autotest_common.sh@10 -- # set +x 00:25:33.945 ************************************ 00:25:33.945 START TEST thread_poller_perf 00:25:33.945 ************************************ 00:25:33.945 18:24:04 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:25:33.946 [2024-12-06 18:24:04.848903] Starting SPDK v25.01-pre git sha1 b6a18b192 / DPDK 24.03.0 initialization... 
00:25:33.946 [2024-12-06 18:24:04.849015] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59460 ] 00:25:34.205 [2024-12-06 18:24:05.028977] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:34.205 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:25:34.205 [2024-12-06 18:24:05.140843] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:35.606 [2024-12-06T18:24:06.555Z] ====================================== 00:25:35.606 [2024-12-06T18:24:06.555Z] busy:2507666680 (cyc) 00:25:35.606 [2024-12-06T18:24:06.555Z] total_run_count: 376000 00:25:35.606 [2024-12-06T18:24:06.555Z] tsc_hz: 2490000000 (cyc) 00:25:35.606 [2024-12-06T18:24:06.555Z] ====================================== 00:25:35.606 [2024-12-06T18:24:06.555Z] poller_cost: 6669 (cyc), 2678 (nsec) 00:25:35.606 ************************************ 00:25:35.606 END TEST thread_poller_perf 00:25:35.606 ************************************ 00:25:35.606 00:25:35.606 real 0m1.592s 00:25:35.606 user 0m1.394s 00:25:35.606 sys 0m0.088s 00:25:35.606 18:24:06 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:35.606 18:24:06 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:25:35.606 18:24:06 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:25:35.606 18:24:06 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:25:35.606 18:24:06 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:35.606 18:24:06 thread -- common/autotest_common.sh@10 -- # set +x 00:25:35.606 ************************************ 00:25:35.606 START TEST thread_poller_perf 00:25:35.606 
************************************ 00:25:35.606 18:24:06 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:25:35.606 [2024-12-06 18:24:06.511380] Starting SPDK v25.01-pre git sha1 b6a18b192 / DPDK 24.03.0 initialization... 00:25:35.606 [2024-12-06 18:24:06.511504] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59496 ] 00:25:35.866 [2024-12-06 18:24:06.693866] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:35.866 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:25:35.866 [2024-12-06 18:24:06.810567] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:37.281 [2024-12-06T18:24:08.230Z] ====================================== 00:25:37.281 [2024-12-06T18:24:08.230Z] busy:2493771110 (cyc) 00:25:37.281 [2024-12-06T18:24:08.230Z] total_run_count: 4704000 00:25:37.281 [2024-12-06T18:24:08.230Z] tsc_hz: 2490000000 (cyc) 00:25:37.281 [2024-12-06T18:24:08.230Z] ====================================== 00:25:37.281 [2024-12-06T18:24:08.230Z] poller_cost: 530 (cyc), 212 (nsec) 00:25:37.281 00:25:37.281 real 0m1.580s 00:25:37.281 user 0m1.366s 00:25:37.281 sys 0m0.105s 00:25:37.281 18:24:08 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:37.281 ************************************ 00:25:37.281 END TEST thread_poller_perf 00:25:37.281 ************************************ 00:25:37.281 18:24:08 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:25:37.281 18:24:08 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:25:37.281 ************************************ 00:25:37.281 END TEST thread 00:25:37.281 ************************************ 00:25:37.281 
00:25:37.281 real 0m3.524s 00:25:37.281 user 0m2.920s 00:25:37.281 sys 0m0.393s 00:25:37.281 18:24:08 thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:37.282 18:24:08 thread -- common/autotest_common.sh@10 -- # set +x 00:25:37.282 18:24:08 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:25:37.282 18:24:08 -- spdk/autotest.sh@176 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:25:37.282 18:24:08 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:25:37.282 18:24:08 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:37.282 18:24:08 -- common/autotest_common.sh@10 -- # set +x 00:25:37.282 ************************************ 00:25:37.282 START TEST app_cmdline 00:25:37.282 ************************************ 00:25:37.282 18:24:08 app_cmdline -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:25:37.559 * Looking for test storage... 00:25:37.559 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:25:37.559 18:24:08 app_cmdline -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:25:37.559 18:24:08 app_cmdline -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:25:37.559 18:24:08 app_cmdline -- common/autotest_common.sh@1711 -- # lcov --version 00:25:37.559 18:24:08 app_cmdline -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:25:37.559 18:24:08 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:37.559 18:24:08 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:37.559 18:24:08 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:37.559 18:24:08 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:25:37.559 18:24:08 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:25:37.559 18:24:08 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:25:37.559 18:24:08 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:25:37.559 18:24:08 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 
00:25:37.559 18:24:08 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:25:37.559 18:24:08 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:25:37.559 18:24:08 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:37.559 18:24:08 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:25:37.559 18:24:08 app_cmdline -- scripts/common.sh@345 -- # : 1 00:25:37.559 18:24:08 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:37.559 18:24:08 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:25:37.559 18:24:08 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:25:37.559 18:24:08 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:25:37.559 18:24:08 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:37.559 18:24:08 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:25:37.559 18:24:08 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:25:37.559 18:24:08 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:25:37.559 18:24:08 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:25:37.559 18:24:08 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:37.559 18:24:08 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:25:37.559 18:24:08 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:25:37.559 18:24:08 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:37.559 18:24:08 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:37.559 18:24:08 app_cmdline -- scripts/common.sh@368 -- # return 0 00:25:37.559 18:24:08 app_cmdline -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:37.559 18:24:08 app_cmdline -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:25:37.559 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:37.560 --rc genhtml_branch_coverage=1 00:25:37.560 --rc genhtml_function_coverage=1 00:25:37.560 --rc 
genhtml_legend=1 00:25:37.560 --rc geninfo_all_blocks=1 00:25:37.560 --rc geninfo_unexecuted_blocks=1 00:25:37.560 00:25:37.560 ' 00:25:37.560 18:24:08 app_cmdline -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:25:37.560 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:37.560 --rc genhtml_branch_coverage=1 00:25:37.560 --rc genhtml_function_coverage=1 00:25:37.560 --rc genhtml_legend=1 00:25:37.560 --rc geninfo_all_blocks=1 00:25:37.560 --rc geninfo_unexecuted_blocks=1 00:25:37.560 00:25:37.560 ' 00:25:37.560 18:24:08 app_cmdline -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:25:37.560 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:37.560 --rc genhtml_branch_coverage=1 00:25:37.560 --rc genhtml_function_coverage=1 00:25:37.560 --rc genhtml_legend=1 00:25:37.560 --rc geninfo_all_blocks=1 00:25:37.560 --rc geninfo_unexecuted_blocks=1 00:25:37.560 00:25:37.560 ' 00:25:37.560 18:24:08 app_cmdline -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:25:37.560 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:37.560 --rc genhtml_branch_coverage=1 00:25:37.560 --rc genhtml_function_coverage=1 00:25:37.560 --rc genhtml_legend=1 00:25:37.560 --rc geninfo_all_blocks=1 00:25:37.560 --rc geninfo_unexecuted_blocks=1 00:25:37.560 00:25:37.560 ' 00:25:37.560 18:24:08 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:25:37.560 18:24:08 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=59580 00:25:37.560 18:24:08 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:25:37.560 18:24:08 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 59580 00:25:37.560 18:24:08 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 59580 ']' 00:25:37.560 18:24:08 app_cmdline -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:37.560 18:24:08 app_cmdline -- common/autotest_common.sh@840 -- # 
local max_retries=100 00:25:37.560 18:24:08 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:37.560 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:37.560 18:24:08 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:37.560 18:24:08 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:25:37.819 [2024-12-06 18:24:08.531596] Starting SPDK v25.01-pre git sha1 b6a18b192 / DPDK 24.03.0 initialization... 00:25:37.819 [2024-12-06 18:24:08.531960] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59580 ] 00:25:37.819 [2024-12-06 18:24:08.722220] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:38.077 [2024-12-06 18:24:08.835207] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:39.027 18:24:09 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:39.027 18:24:09 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:25:39.027 18:24:09 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:25:39.027 { 00:25:39.027 "version": "SPDK v25.01-pre git sha1 b6a18b192", 00:25:39.027 "fields": { 00:25:39.027 "major": 25, 00:25:39.027 "minor": 1, 00:25:39.027 "patch": 0, 00:25:39.027 "suffix": "-pre", 00:25:39.027 "commit": "b6a18b192" 00:25:39.027 } 00:25:39.027 } 00:25:39.027 18:24:09 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:25:39.027 18:24:09 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:25:39.027 18:24:09 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:25:39.027 18:24:09 app_cmdline -- app/cmdline.sh@26 -- # 
methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:25:39.027 18:24:09 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:25:39.027 18:24:09 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:25:39.027 18:24:09 app_cmdline -- app/cmdline.sh@26 -- # sort 00:25:39.027 18:24:09 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:39.027 18:24:09 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:25:39.027 18:24:09 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:39.286 18:24:09 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:25:39.286 18:24:09 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:25:39.286 18:24:09 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:25:39.286 18:24:09 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:25:39.286 18:24:09 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:25:39.286 18:24:09 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:25:39.286 18:24:09 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:39.286 18:24:09 app_cmdline -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:25:39.286 18:24:09 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:39.286 18:24:09 app_cmdline -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:25:39.286 18:24:09 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:39.286 18:24:09 app_cmdline -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:25:39.286 18:24:09 app_cmdline -- common/autotest_common.sh@646 
-- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:25:39.286 18:24:09 app_cmdline -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:25:39.286 request: 00:25:39.286 { 00:25:39.286 "method": "env_dpdk_get_mem_stats", 00:25:39.286 "req_id": 1 00:25:39.286 } 00:25:39.286 Got JSON-RPC error response 00:25:39.286 response: 00:25:39.286 { 00:25:39.286 "code": -32601, 00:25:39.286 "message": "Method not found" 00:25:39.286 } 00:25:39.286 18:24:10 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:25:39.286 18:24:10 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:25:39.286 18:24:10 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:25:39.286 18:24:10 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:25:39.286 18:24:10 app_cmdline -- app/cmdline.sh@1 -- # killprocess 59580 00:25:39.286 18:24:10 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 59580 ']' 00:25:39.286 18:24:10 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 59580 00:25:39.286 18:24:10 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:25:39.286 18:24:10 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:39.286 18:24:10 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59580 00:25:39.544 killing process with pid 59580 00:25:39.544 18:24:10 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:39.544 18:24:10 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:39.544 18:24:10 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59580' 00:25:39.544 18:24:10 app_cmdline -- common/autotest_common.sh@973 -- # kill 59580 00:25:39.544 18:24:10 app_cmdline -- common/autotest_common.sh@978 -- # wait 59580 00:25:42.083 ************************************ 00:25:42.083 END TEST app_cmdline 00:25:42.083 ************************************ 
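The `env_dpdk_get_mem_stats` call above fails with JSON-RPC error `-32601` because `spdk_tgt` was launched with `--rpcs-allowed spdk_get_version,rpc_get_methods`, so every other method is rejected before dispatch. A hypothetical sketch of that allowlist check follows; the function names and response shape are illustrative (the error object matches the log and the JSON-RPC 2.0 spec), not SPDK's actual dispatcher code:

```python
ALLOWED_RPCS = {"spdk_get_version", "rpc_get_methods"}

def dispatch(request: dict) -> dict:
    """Reject any method outside the configured allowlist with JSON-RPC -32601."""
    if request["method"] not in ALLOWED_RPCS:
        return {
            "jsonrpc": "2.0",
            "id": request.get("id"),
            "error": {"code": -32601, "message": "Method not found"},
        }
    # Allowed methods would be handled here; an empty result stands in for that.
    return {"jsonrpc": "2.0", "id": request.get("id"), "result": {}}

resp = dispatch({"method": "env_dpdk_get_mem_stats", "id": 1})
print(resp["error"])  # same code/message pair the test asserts on above
```

The test harness relies on exactly this behavior: it runs the disallowed RPC under `NOT`, expecting a nonzero exit (`es=1`) driven by the `-32601` response.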
00:25:42.083 00:25:42.083 real 0m4.509s 00:25:42.083 user 0m4.791s 00:25:42.083 sys 0m0.660s 00:25:42.083 18:24:12 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:42.083 18:24:12 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:25:42.083 18:24:12 -- spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:25:42.083 18:24:12 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:25:42.083 18:24:12 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:42.083 18:24:12 -- common/autotest_common.sh@10 -- # set +x 00:25:42.083 ************************************ 00:25:42.083 START TEST version 00:25:42.083 ************************************ 00:25:42.083 18:24:12 version -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:25:42.083 * Looking for test storage... 00:25:42.083 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:25:42.083 18:24:12 version -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:25:42.083 18:24:12 version -- common/autotest_common.sh@1711 -- # lcov --version 00:25:42.083 18:24:12 version -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:25:42.083 18:24:12 version -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:25:42.083 18:24:12 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:42.083 18:24:12 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:42.083 18:24:12 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:42.083 18:24:12 version -- scripts/common.sh@336 -- # IFS=.-: 00:25:42.083 18:24:12 version -- scripts/common.sh@336 -- # read -ra ver1 00:25:42.083 18:24:12 version -- scripts/common.sh@337 -- # IFS=.-: 00:25:42.083 18:24:12 version -- scripts/common.sh@337 -- # read -ra ver2 00:25:42.083 18:24:12 version -- scripts/common.sh@338 -- # local 'op=<' 00:25:42.083 18:24:12 version -- scripts/common.sh@340 -- # ver1_l=2 00:25:42.083 18:24:12 version -- 
scripts/common.sh@341 -- # ver2_l=1 00:25:42.083 18:24:12 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:42.083 18:24:12 version -- scripts/common.sh@344 -- # case "$op" in 00:25:42.083 18:24:12 version -- scripts/common.sh@345 -- # : 1 00:25:42.083 18:24:12 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:42.083 18:24:12 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:25:42.083 18:24:12 version -- scripts/common.sh@365 -- # decimal 1 00:25:42.083 18:24:12 version -- scripts/common.sh@353 -- # local d=1 00:25:42.083 18:24:12 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:42.083 18:24:12 version -- scripts/common.sh@355 -- # echo 1 00:25:42.083 18:24:12 version -- scripts/common.sh@365 -- # ver1[v]=1 00:25:42.083 18:24:12 version -- scripts/common.sh@366 -- # decimal 2 00:25:42.083 18:24:12 version -- scripts/common.sh@353 -- # local d=2 00:25:42.083 18:24:12 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:42.083 18:24:12 version -- scripts/common.sh@355 -- # echo 2 00:25:42.083 18:24:12 version -- scripts/common.sh@366 -- # ver2[v]=2 00:25:42.083 18:24:12 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:42.083 18:24:12 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:42.083 18:24:12 version -- scripts/common.sh@368 -- # return 0 00:25:42.083 18:24:12 version -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:42.083 18:24:12 version -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:25:42.083 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:42.083 --rc genhtml_branch_coverage=1 00:25:42.083 --rc genhtml_function_coverage=1 00:25:42.083 --rc genhtml_legend=1 00:25:42.083 --rc geninfo_all_blocks=1 00:25:42.083 --rc geninfo_unexecuted_blocks=1 00:25:42.083 00:25:42.083 ' 00:25:42.083 18:24:12 version -- common/autotest_common.sh@1724 -- # 
LCOV_OPTS=' 00:25:42.083 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:42.083 --rc genhtml_branch_coverage=1 00:25:42.083 --rc genhtml_function_coverage=1 00:25:42.083 --rc genhtml_legend=1 00:25:42.083 --rc geninfo_all_blocks=1 00:25:42.083 --rc geninfo_unexecuted_blocks=1 00:25:42.083 00:25:42.083 ' 00:25:42.083 18:24:12 version -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:25:42.083 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:42.083 --rc genhtml_branch_coverage=1 00:25:42.083 --rc genhtml_function_coverage=1 00:25:42.083 --rc genhtml_legend=1 00:25:42.083 --rc geninfo_all_blocks=1 00:25:42.083 --rc geninfo_unexecuted_blocks=1 00:25:42.083 00:25:42.083 ' 00:25:42.083 18:24:12 version -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:25:42.083 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:42.083 --rc genhtml_branch_coverage=1 00:25:42.083 --rc genhtml_function_coverage=1 00:25:42.083 --rc genhtml_legend=1 00:25:42.083 --rc geninfo_all_blocks=1 00:25:42.083 --rc geninfo_unexecuted_blocks=1 00:25:42.083 00:25:42.083 ' 00:25:42.083 18:24:12 version -- app/version.sh@17 -- # get_header_version major 00:25:42.083 18:24:12 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:25:42.083 18:24:12 version -- app/version.sh@14 -- # cut -f2 00:25:42.083 18:24:12 version -- app/version.sh@14 -- # tr -d '"' 00:25:42.083 18:24:12 version -- app/version.sh@17 -- # major=25 00:25:42.083 18:24:12 version -- app/version.sh@18 -- # get_header_version minor 00:25:42.083 18:24:12 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:25:42.083 18:24:12 version -- app/version.sh@14 -- # cut -f2 00:25:42.083 18:24:12 version -- app/version.sh@14 -- # tr -d '"' 00:25:42.083 18:24:12 version -- app/version.sh@18 -- # minor=1 00:25:42.083 18:24:13 
version -- app/version.sh@19 -- # get_header_version patch 00:25:42.083 18:24:13 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:25:42.083 18:24:13 version -- app/version.sh@14 -- # cut -f2 00:25:42.083 18:24:13 version -- app/version.sh@14 -- # tr -d '"' 00:25:42.083 18:24:13 version -- app/version.sh@19 -- # patch=0 00:25:42.083 18:24:13 version -- app/version.sh@20 -- # get_header_version suffix 00:25:42.083 18:24:13 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:25:42.083 18:24:13 version -- app/version.sh@14 -- # cut -f2 00:25:42.083 18:24:13 version -- app/version.sh@14 -- # tr -d '"' 00:25:42.083 18:24:13 version -- app/version.sh@20 -- # suffix=-pre 00:25:42.083 18:24:13 version -- app/version.sh@22 -- # version=25.1 00:25:42.083 18:24:13 version -- app/version.sh@25 -- # (( patch != 0 )) 00:25:42.083 18:24:13 version -- app/version.sh@28 -- # version=25.1rc0 00:25:42.083 18:24:13 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:25:42.083 18:24:13 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:25:42.343 18:24:13 version -- app/version.sh@30 -- # py_version=25.1rc0 00:25:42.343 18:24:13 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:25:42.343 ************************************ 00:25:42.343 END TEST version 00:25:42.343 ************************************ 00:25:42.343 00:25:42.343 real 0m0.326s 00:25:42.343 user 0m0.193s 00:25:42.343 sys 0m0.184s 00:25:42.343 18:24:13 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:42.343 18:24:13 version -- common/autotest_common.sh@10 -- # set +x 00:25:42.343 
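The version.sh steps above assemble a version string from the `SPDK_VERSION_*` header defines: with major=25, minor=1, patch=0, suffix=-pre, the script produces `25.1` and then `25.1rc0`, which it compares against Python's `spdk.__version__`. A minimal sketch of that assembly, reconstructed from the values in this log (the `-pre` to `rc0` mapping is an assumption based on the observed output, as is the patch-append rule):

```python
def spdk_version_string(major: int, minor: int, patch: int, suffix: str) -> str:
    """Assemble an SPDK-style version string from its header components."""
    version = f"{major}.{minor}"
    if patch != 0:
        version += f".{patch}"  # patch is only appended when nonzero
    if suffix == "-pre":
        version += "rc0"  # pre-release builds advertise as rc0 (assumed mapping)
    return version

print(spdk_version_string(25, 1, 0, "-pre"))  # 25.1rc0, as in the log
```

The `[[ 25.1rc0 == \2\5\.\1\r\c\0 ]]` comparison at the end of the test is the shell-side check that the header-derived string and the Python package version agree.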
18:24:13 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:25:42.343 18:24:13 -- spdk/autotest.sh@188 -- # [[ 1 -eq 1 ]] 00:25:42.343 18:24:13 -- spdk/autotest.sh@189 -- # run_test bdev_raid /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh 00:25:42.343 18:24:13 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:25:42.343 18:24:13 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:42.343 18:24:13 -- common/autotest_common.sh@10 -- # set +x 00:25:42.343 ************************************ 00:25:42.343 START TEST bdev_raid 00:25:42.343 ************************************ 00:25:42.343 18:24:13 bdev_raid -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh 00:25:42.343 * Looking for test storage... 00:25:42.343 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:25:42.343 18:24:13 bdev_raid -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:25:42.343 18:24:13 bdev_raid -- common/autotest_common.sh@1711 -- # lcov --version 00:25:42.343 18:24:13 bdev_raid -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:25:42.604 18:24:13 bdev_raid -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:25:42.604 18:24:13 bdev_raid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:42.604 18:24:13 bdev_raid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:42.604 18:24:13 bdev_raid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:42.604 18:24:13 bdev_raid -- scripts/common.sh@336 -- # IFS=.-: 00:25:42.604 18:24:13 bdev_raid -- scripts/common.sh@336 -- # read -ra ver1 00:25:42.604 18:24:13 bdev_raid -- scripts/common.sh@337 -- # IFS=.-: 00:25:42.604 18:24:13 bdev_raid -- scripts/common.sh@337 -- # read -ra ver2 00:25:42.604 18:24:13 bdev_raid -- scripts/common.sh@338 -- # local 'op=<' 00:25:42.604 18:24:13 bdev_raid -- scripts/common.sh@340 -- # ver1_l=2 00:25:42.604 18:24:13 bdev_raid -- scripts/common.sh@341 -- # ver2_l=1 00:25:42.604 18:24:13 bdev_raid -- scripts/common.sh@343 -- # local 
lt=0 gt=0 eq=0 v 00:25:42.604 18:24:13 bdev_raid -- scripts/common.sh@344 -- # case "$op" in 00:25:42.604 18:24:13 bdev_raid -- scripts/common.sh@345 -- # : 1 00:25:42.604 18:24:13 bdev_raid -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:42.604 18:24:13 bdev_raid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:25:42.604 18:24:13 bdev_raid -- scripts/common.sh@365 -- # decimal 1 00:25:42.604 18:24:13 bdev_raid -- scripts/common.sh@353 -- # local d=1 00:25:42.604 18:24:13 bdev_raid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:42.604 18:24:13 bdev_raid -- scripts/common.sh@355 -- # echo 1 00:25:42.604 18:24:13 bdev_raid -- scripts/common.sh@365 -- # ver1[v]=1 00:25:42.604 18:24:13 bdev_raid -- scripts/common.sh@366 -- # decimal 2 00:25:42.604 18:24:13 bdev_raid -- scripts/common.sh@353 -- # local d=2 00:25:42.604 18:24:13 bdev_raid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:42.604 18:24:13 bdev_raid -- scripts/common.sh@355 -- # echo 2 00:25:42.604 18:24:13 bdev_raid -- scripts/common.sh@366 -- # ver2[v]=2 00:25:42.604 18:24:13 bdev_raid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:42.604 18:24:13 bdev_raid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:42.604 18:24:13 bdev_raid -- scripts/common.sh@368 -- # return 0 00:25:42.604 18:24:13 bdev_raid -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:42.604 18:24:13 bdev_raid -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:25:42.604 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:42.604 --rc genhtml_branch_coverage=1 00:25:42.604 --rc genhtml_function_coverage=1 00:25:42.604 --rc genhtml_legend=1 00:25:42.604 --rc geninfo_all_blocks=1 00:25:42.604 --rc geninfo_unexecuted_blocks=1 00:25:42.604 00:25:42.604 ' 00:25:42.604 18:24:13 bdev_raid -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:25:42.604 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:25:42.604 --rc genhtml_branch_coverage=1 00:25:42.604 --rc genhtml_function_coverage=1 00:25:42.604 --rc genhtml_legend=1 00:25:42.604 --rc geninfo_all_blocks=1 00:25:42.604 --rc geninfo_unexecuted_blocks=1 00:25:42.604 00:25:42.604 ' 00:25:42.604 18:24:13 bdev_raid -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:25:42.604 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:42.604 --rc genhtml_branch_coverage=1 00:25:42.604 --rc genhtml_function_coverage=1 00:25:42.604 --rc genhtml_legend=1 00:25:42.604 --rc geninfo_all_blocks=1 00:25:42.604 --rc geninfo_unexecuted_blocks=1 00:25:42.604 00:25:42.604 ' 00:25:42.604 18:24:13 bdev_raid -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:25:42.604 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:42.604 --rc genhtml_branch_coverage=1 00:25:42.604 --rc genhtml_function_coverage=1 00:25:42.604 --rc genhtml_legend=1 00:25:42.604 --rc geninfo_all_blocks=1 00:25:42.604 --rc geninfo_unexecuted_blocks=1 00:25:42.604 00:25:42.604 ' 00:25:42.604 18:24:13 bdev_raid -- bdev/bdev_raid.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:25:42.604 18:24:13 bdev_raid -- bdev/nbd_common.sh@6 -- # set -e 00:25:42.604 18:24:13 bdev_raid -- bdev/bdev_raid.sh@14 -- # rpc_py=rpc_cmd 00:25:42.604 18:24:13 bdev_raid -- bdev/bdev_raid.sh@946 -- # mkdir -p /raidtest 00:25:42.604 18:24:13 bdev_raid -- bdev/bdev_raid.sh@947 -- # trap 'cleanup; exit 1' EXIT 00:25:42.604 18:24:13 bdev_raid -- bdev/bdev_raid.sh@949 -- # base_blocklen=512 00:25:42.604 18:24:13 bdev_raid -- bdev/bdev_raid.sh@951 -- # run_test raid1_resize_data_offset_test raid_resize_data_offset_test 00:25:42.604 18:24:13 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:25:42.604 18:24:13 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:42.604 18:24:13 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:25:42.604 ************************************ 
00:25:42.604 START TEST raid1_resize_data_offset_test 00:25:42.604 ************************************ 00:25:42.604 18:24:13 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@1129 -- # raid_resize_data_offset_test 00:25:42.604 18:24:13 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@917 -- # raid_pid=59773 00:25:42.604 18:24:13 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@918 -- # echo 'Process raid pid: 59773' 00:25:42.604 Process raid pid: 59773 00:25:42.604 18:24:13 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@919 -- # waitforlisten 59773 00:25:42.604 18:24:13 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@916 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:25:42.604 18:24:13 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@835 -- # '[' -z 59773 ']' 00:25:42.604 18:24:13 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:42.604 18:24:13 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:42.604 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:42.604 18:24:13 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:42.604 18:24:13 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:42.604 18:24:13 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:25:42.604 [2024-12-06 18:24:13.488182] Starting SPDK v25.01-pre git sha1 b6a18b192 / DPDK 24.03.0 initialization... 
00:25:42.604 [2024-12-06 18:24:13.488323] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:42.864 [2024-12-06 18:24:13.669037] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:42.864 [2024-12-06 18:24:13.790210] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:43.123 [2024-12-06 18:24:14.004723] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:25:43.123 [2024-12-06 18:24:14.004756] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:25:43.383 18:24:14 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:43.383 18:24:14 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@868 -- # return 0 00:25:43.383 18:24:14 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@922 -- # rpc_cmd bdev_malloc_create -b malloc0 64 512 -o 16 00:25:43.383 18:24:14 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:43.383 18:24:14 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:25:43.643 malloc0 00:25:43.643 18:24:14 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:43.643 18:24:14 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@923 -- # rpc_cmd bdev_malloc_create -b malloc1 64 512 -o 16 00:25:43.643 18:24:14 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:43.643 18:24:14 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:25:43.643 malloc1 00:25:43.643 18:24:14 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:43.643 18:24:14 
bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@924 -- # rpc_cmd bdev_null_create null0 64 512 00:25:43.643 18:24:14 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:43.643 18:24:14 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:25:43.643 null0 00:25:43.643 18:24:14 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:43.643 18:24:14 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@926 -- # rpc_cmd bdev_raid_create -n Raid -r 1 -b ''\''malloc0 malloc1 null0'\''' -s 00:25:43.643 18:24:14 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:43.643 18:24:14 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:25:43.643 [2024-12-06 18:24:14.512823] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc0 is claimed 00:25:43.643 [2024-12-06 18:24:14.514972] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:25:43.643 [2024-12-06 18:24:14.515027] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev null0 is claimed 00:25:43.643 [2024-12-06 18:24:14.515224] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:25:43.643 [2024-12-06 18:24:14.515251] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 129024, blocklen 512 00:25:43.643 [2024-12-06 18:24:14.515528] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:25:43.643 [2024-12-06 18:24:14.515688] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:25:43.643 [2024-12-06 18:24:14.515708] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:25:43.643 [2024-12-06 18:24:14.515862] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 
00:25:43.643 18:24:14 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:43.643 18:24:14 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@929 -- # jq -r '.[].base_bdevs_list[2].data_offset' 00:25:43.643 18:24:14 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@929 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:43.643 18:24:14 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:43.643 18:24:14 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:25:43.643 18:24:14 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:43.643 18:24:14 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@929 -- # (( 2048 == 2048 )) 00:25:43.643 18:24:14 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@931 -- # rpc_cmd bdev_null_delete null0 00:25:43.643 18:24:14 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:43.643 18:24:14 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:25:43.643 [2024-12-06 18:24:14.568725] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: null0 00:25:43.643 18:24:14 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:43.643 18:24:14 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@935 -- # rpc_cmd bdev_malloc_create -b malloc2 512 512 -o 30 00:25:43.643 18:24:14 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:43.643 18:24:14 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:25:44.215 malloc2 00:25:44.215 18:24:15 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:44.215 18:24:15 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@936 -- # rpc_cmd bdev_raid_add_base_bdev 
Raid malloc2 00:25:44.215 18:24:15 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:44.215 18:24:15 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:25:44.215 [2024-12-06 18:24:15.160905] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:25:44.477 [2024-12-06 18:24:15.179737] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:25:44.477 18:24:15 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:44.477 [2024-12-06 18:24:15.181818] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev Raid 00:25:44.477 18:24:15 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@939 -- # rpc_cmd bdev_raid_get_bdevs all 00:25:44.477 18:24:15 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:44.477 18:24:15 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@939 -- # jq -r '.[].base_bdevs_list[2].data_offset' 00:25:44.477 18:24:15 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:25:44.477 18:24:15 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:44.477 18:24:15 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@939 -- # (( 2070 == 2070 )) 00:25:44.477 18:24:15 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@941 -- # killprocess 59773 00:25:44.477 18:24:15 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@954 -- # '[' -z 59773 ']' 00:25:44.477 18:24:15 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@958 -- # kill -0 59773 00:25:44.477 18:24:15 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@959 -- # uname 00:25:44.477 18:24:15 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux 
']' 00:25:44.477 18:24:15 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59773 00:25:44.477 killing process with pid 59773 00:25:44.477 18:24:15 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:44.477 18:24:15 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:44.477 18:24:15 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59773' 00:25:44.477 18:24:15 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@973 -- # kill 59773 00:25:44.477 [2024-12-06 18:24:15.275781] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:25:44.477 18:24:15 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@978 -- # wait 59773 00:25:44.477 [2024-12-06 18:24:15.276855] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev Raid: Operation canceled 00:25:44.477 [2024-12-06 18:24:15.276924] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:44.477 [2024-12-06 18:24:15.276944] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: malloc2 00:25:44.477 [2024-12-06 18:24:15.314861] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:25:44.477 [2024-12-06 18:24:15.315194] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:25:44.477 [2024-12-06 18:24:15.315216] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:25:46.379 [2024-12-06 18:24:17.214080] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:25:47.832 18:24:18 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@943 -- # return 0 00:25:47.832 00:25:47.832 real 0m5.113s 00:25:47.832 user 0m4.966s 00:25:47.832 sys 0m0.604s 00:25:47.832 
************************************ 00:25:47.832 END TEST raid1_resize_data_offset_test 00:25:47.832 ************************************ 00:25:47.832 18:24:18 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:47.832 18:24:18 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:25:47.832 18:24:18 bdev_raid -- bdev/bdev_raid.sh@953 -- # run_test raid0_resize_superblock_test raid_resize_superblock_test 0 00:25:47.832 18:24:18 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:25:47.832 18:24:18 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:47.832 18:24:18 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:25:47.832 ************************************ 00:25:47.832 START TEST raid0_resize_superblock_test 00:25:47.832 ************************************ 00:25:47.832 18:24:18 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@1129 -- # raid_resize_superblock_test 0 00:25:47.832 18:24:18 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@854 -- # local raid_level=0 00:25:47.832 18:24:18 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@857 -- # raid_pid=59865 00:25:47.832 18:24:18 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@856 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:25:47.832 Process raid pid: 59865 00:25:47.832 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:25:47.832 18:24:18 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@858 -- # echo 'Process raid pid: 59865' 00:25:47.832 18:24:18 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@859 -- # waitforlisten 59865 00:25:47.832 18:24:18 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 59865 ']' 00:25:47.832 18:24:18 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:47.832 18:24:18 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:47.832 18:24:18 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:47.832 18:24:18 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:47.832 18:24:18 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:47.832 [2024-12-06 18:24:18.676722] Starting SPDK v25.01-pre git sha1 b6a18b192 / DPDK 24.03.0 initialization... 
00:25:47.832 [2024-12-06 18:24:18.676852] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:48.089 [2024-12-06 18:24:18.844263] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:48.089 [2024-12-06 18:24:18.988599] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:48.348 [2024-12-06 18:24:19.233976] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:25:48.348 [2024-12-06 18:24:19.234248] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:25:48.607 18:24:19 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:48.607 18:24:19 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:25:48.607 18:24:19 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@861 -- # rpc_cmd bdev_malloc_create -b malloc0 512 512 00:25:48.607 18:24:19 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:48.607 18:24:19 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:49.543 malloc0 00:25:49.543 18:24:20 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:49.543 18:24:20 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@863 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:25:49.543 18:24:20 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:49.543 18:24:20 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:49.543 [2024-12-06 18:24:20.221695] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:25:49.543 [2024-12-06 18:24:20.221996] vbdev_passthru.c: 
636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:49.543 [2024-12-06 18:24:20.222037] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:25:49.543 [2024-12-06 18:24:20.222055] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:49.543 [2024-12-06 18:24:20.224955] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:49.543 [2024-12-06 18:24:20.225133] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 00:25:49.543 pt0 00:25:49.543 18:24:20 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:49.543 18:24:20 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@864 -- # rpc_cmd bdev_lvol_create_lvstore pt0 lvs0 00:25:49.543 18:24:20 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:49.543 18:24:20 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:49.543 5459a481-50dd-4658-a1ce-7fff09bf685b 00:25:49.543 18:24:20 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:49.543 18:24:20 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@866 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol0 64 00:25:49.543 18:24:20 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:49.543 18:24:20 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:49.543 197fd62a-c55e-406f-9e0a-b698bf14154c 00:25:49.543 18:24:20 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:49.543 18:24:20 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@867 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol1 64 00:25:49.543 18:24:20 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:49.543 18:24:20 
bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:49.543 ddfa210e-2afe-47bd-abcf-bc12d6e0c1b2 00:25:49.543 18:24:20 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:49.543 18:24:20 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@869 -- # case $raid_level in 00:25:49.543 18:24:20 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@870 -- # rpc_cmd bdev_raid_create -n Raid -r 0 -z 64 -b ''\''lvs0/lvol0 lvs0/lvol1'\''' -s 00:25:49.543 18:24:20 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:49.543 18:24:20 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:49.543 [2024-12-06 18:24:20.401911] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 197fd62a-c55e-406f-9e0a-b698bf14154c is claimed 00:25:49.543 [2024-12-06 18:24:20.402024] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev ddfa210e-2afe-47bd-abcf-bc12d6e0c1b2 is claimed 00:25:49.543 [2024-12-06 18:24:20.402191] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:25:49.543 [2024-12-06 18:24:20.402215] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 245760, blocklen 512 00:25:49.543 [2024-12-06 18:24:20.402553] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:25:49.543 [2024-12-06 18:24:20.402744] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:25:49.543 [2024-12-06 18:24:20.402763] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:25:49.543 [2024-12-06 18:24:20.402928] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:49.543 18:24:20 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:49.543 18:24:20 bdev_raid.raid0_resize_superblock_test -- 
bdev/bdev_raid.sh@875 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:25:49.543 18:24:20 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:49.543 18:24:20 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # jq '.[].num_blocks' 00:25:49.543 18:24:20 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:49.543 18:24:20 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:49.543 18:24:20 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # (( 64 == 64 )) 00:25:49.543 18:24:20 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:25:49.543 18:24:20 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:49.543 18:24:20 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:49.543 18:24:20 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # jq '.[].num_blocks' 00:25:49.543 18:24:20 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:49.803 18:24:20 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # (( 64 == 64 )) 00:25:49.803 18:24:20 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:25:49.803 18:24:20 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # rpc_cmd bdev_get_bdevs -b Raid 00:25:49.803 18:24:20 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:25:49.803 18:24:20 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # jq '.[].num_blocks' 00:25:49.803 18:24:20 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:49.803 18:24:20 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:49.803 [2024-12-06 
18:24:20.510037] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:25:49.803 18:24:20 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:49.803 18:24:20 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:25:49.803 18:24:20 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:25:49.803 18:24:20 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # (( 245760 == 245760 )) 00:25:49.803 18:24:20 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@885 -- # rpc_cmd bdev_lvol_resize lvs0/lvol0 100 00:25:49.803 18:24:20 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:49.803 18:24:20 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:49.803 [2024-12-06 18:24:20.553924] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:25:49.803 [2024-12-06 18:24:20.553958] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev '197fd62a-c55e-406f-9e0a-b698bf14154c' was resized: old size 131072, new size 204800 00:25:49.803 18:24:20 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:49.803 18:24:20 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@886 -- # rpc_cmd bdev_lvol_resize lvs0/lvol1 100 00:25:49.803 18:24:20 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:49.803 18:24:20 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:49.803 [2024-12-06 18:24:20.565804] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:25:49.803 [2024-12-06 18:24:20.565833] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'ddfa210e-2afe-47bd-abcf-bc12d6e0c1b2' was resized: old size 131072, new size 204800 00:25:49.803 
[2024-12-06 18:24:20.565870] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 245760 to 393216 00:25:49.803 18:24:20 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:49.803 18:24:20 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # jq '.[].num_blocks' 00:25:49.803 18:24:20 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:25:49.803 18:24:20 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:49.803 18:24:20 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:49.803 18:24:20 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:49.803 18:24:20 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # (( 100 == 100 )) 00:25:49.803 18:24:20 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:25:49.803 18:24:20 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # jq '.[].num_blocks' 00:25:49.803 18:24:20 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:49.803 18:24:20 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:49.803 18:24:20 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:49.803 18:24:20 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # (( 100 == 100 )) 00:25:49.803 18:24:20 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:25:49.803 18:24:20 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # rpc_cmd bdev_get_bdevs -b Raid 00:25:49.803 18:24:20 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:25:49.803 18:24:20 
bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # jq '.[].num_blocks' 00:25:49.803 18:24:20 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:49.803 18:24:20 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:49.803 [2024-12-06 18:24:20.669845] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:25:49.803 18:24:20 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:49.803 18:24:20 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:25:49.803 18:24:20 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:25:49.803 18:24:20 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # (( 393216 == 393216 )) 00:25:49.803 18:24:20 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@898 -- # rpc_cmd bdev_passthru_delete pt0 00:25:49.803 18:24:20 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:49.803 18:24:20 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:49.803 [2024-12-06 18:24:20.709544] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev pt0 being removed: closing lvstore lvs0 00:25:49.803 [2024-12-06 18:24:20.709620] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol0 00:25:49.803 [2024-12-06 18:24:20.709638] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:25:49.803 [2024-12-06 18:24:20.709654] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol1 00:25:49.803 [2024-12-06 18:24:20.709761] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:25:49.803 [2024-12-06 18:24:20.709811] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:25:49.803 
[2024-12-06 18:24:20.709826] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:25:49.803 18:24:20 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:49.803 18:24:20 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@899 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:25:49.803 18:24:20 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:49.803 18:24:20 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:49.803 [2024-12-06 18:24:20.721461] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:25:49.803 [2024-12-06 18:24:20.721518] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:49.803 [2024-12-06 18:24:20.721540] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:25:49.803 [2024-12-06 18:24:20.721554] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:49.803 [2024-12-06 18:24:20.724062] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:49.803 [2024-12-06 18:24:20.724108] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 00:25:49.803 [2024-12-06 18:24:20.725852] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev 197fd62a-c55e-406f-9e0a-b698bf14154c 00:25:49.803 [2024-12-06 18:24:20.725926] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 197fd62a-c55e-406f-9e0a-b698bf14154c is claimed 00:25:49.804 [2024-12-06 18:24:20.726037] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev ddfa210e-2afe-47bd-abcf-bc12d6e0c1b2 00:25:49.804 [2024-12-06 18:24:20.726058] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev ddfa210e-2afe-47bd-abcf-bc12d6e0c1b2 is claimed 00:25:49.804 [2024-12-06 18:24:20.726231] 
bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev ddfa210e-2afe-47bd-abcf-bc12d6e0c1b2 (2) smaller than existing raid bdev Raid (3) 00:25:49.804 [2024-12-06 18:24:20.726260] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev 197fd62a-c55e-406f-9e0a-b698bf14154c: File exists 00:25:49.804 [2024-12-06 18:24:20.726303] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:25:49.804 [2024-12-06 18:24:20.726334] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 393216, blocklen 512 00:25:49.804 [2024-12-06 18:24:20.726608] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:25:49.804 pt0 00:25:49.804 [2024-12-06 18:24:20.726750] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:25:49.804 [2024-12-06 18:24:20.726760] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007b00 00:25:49.804 [2024-12-06 18:24:20.726908] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:49.804 18:24:20 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:49.804 18:24:20 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@900 -- # rpc_cmd bdev_wait_for_examine 00:25:49.804 18:24:20 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:49.804 18:24:20 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:49.804 18:24:20 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:49.804 18:24:20 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:25:49.804 18:24:20 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@905 -- # rpc_cmd bdev_get_bdevs -b Raid 00:25:49.804 18:24:20 bdev_raid.raid0_resize_superblock_test -- 
bdev/bdev_raid.sh@904 -- # case $raid_level in 00:25:49.804 18:24:20 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@905 -- # jq '.[].num_blocks' 00:25:49.804 18:24:20 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:49.804 18:24:20 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:49.804 [2024-12-06 18:24:20.746135] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:25:50.063 18:24:20 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:50.063 18:24:20 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:25:50.063 18:24:20 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:25:50.063 18:24:20 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@905 -- # (( 393216 == 393216 )) 00:25:50.063 18:24:20 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@909 -- # killprocess 59865 00:25:50.063 18:24:20 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 59865 ']' 00:25:50.063 18:24:20 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@958 -- # kill -0 59865 00:25:50.063 18:24:20 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@959 -- # uname 00:25:50.063 18:24:20 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:50.063 18:24:20 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59865 00:25:50.063 killing process with pid 59865 00:25:50.063 18:24:20 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:50.063 18:24:20 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:50.063 18:24:20 bdev_raid.raid0_resize_superblock_test -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 59865' 00:25:50.063 18:24:20 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@973 -- # kill 59865 00:25:50.063 [2024-12-06 18:24:20.829707] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:25:50.063 [2024-12-06 18:24:20.829790] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:25:50.063 [2024-12-06 18:24:20.829837] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:25:50.063 [2024-12-06 18:24:20.829848] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Raid, state offline 00:25:50.063 18:24:20 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@978 -- # wait 59865 00:25:51.441 [2024-12-06 18:24:22.287104] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:25:52.819 ************************************ 00:25:52.819 END TEST raid0_resize_superblock_test 00:25:52.819 ************************************ 00:25:52.819 18:24:23 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@911 -- # return 0 00:25:52.819 00:25:52.819 real 0m4.873s 00:25:52.819 user 0m4.882s 00:25:52.819 sys 0m0.827s 00:25:52.819 18:24:23 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:52.819 18:24:23 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:52.819 18:24:23 bdev_raid -- bdev/bdev_raid.sh@954 -- # run_test raid1_resize_superblock_test raid_resize_superblock_test 1 00:25:52.819 18:24:23 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:25:52.819 18:24:23 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:52.819 18:24:23 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:25:52.819 ************************************ 00:25:52.819 START TEST raid1_resize_superblock_test 00:25:52.819 
************************************ 00:25:52.819 18:24:23 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@1129 -- # raid_resize_superblock_test 1 00:25:52.819 18:24:23 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@854 -- # local raid_level=1 00:25:52.819 18:24:23 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@857 -- # raid_pid=59969 00:25:52.819 18:24:23 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@856 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:25:52.819 Process raid pid: 59969 00:25:52.819 18:24:23 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@858 -- # echo 'Process raid pid: 59969' 00:25:52.819 18:24:23 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@859 -- # waitforlisten 59969 00:25:52.819 18:24:23 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 59969 ']' 00:25:52.819 18:24:23 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:52.819 18:24:23 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:52.819 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:52.819 18:24:23 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:52.819 18:24:23 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:52.819 18:24:23 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:52.819 [2024-12-06 18:24:23.626931] Starting SPDK v25.01-pre git sha1 b6a18b192 / DPDK 24.03.0 initialization... 
00:25:52.819 [2024-12-06 18:24:23.627618] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:53.079 [2024-12-06 18:24:23.808008] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:53.079 [2024-12-06 18:24:23.921242] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:53.338 [2024-12-06 18:24:24.128464] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:25:53.338 [2024-12-06 18:24:24.128503] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:25:53.598 18:24:24 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:53.598 18:24:24 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:25:53.598 18:24:24 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@861 -- # rpc_cmd bdev_malloc_create -b malloc0 512 512 00:25:53.598 18:24:24 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:53.598 18:24:24 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:54.167 malloc0 00:25:54.167 18:24:25 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:54.167 18:24:25 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@863 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:25:54.167 18:24:25 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:54.167 18:24:25 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:54.167 [2024-12-06 18:24:25.065105] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:25:54.167 [2024-12-06 18:24:25.065183] vbdev_passthru.c: 
636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:54.167 [2024-12-06 18:24:25.065208] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:25:54.167 [2024-12-06 18:24:25.065223] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:54.167 [2024-12-06 18:24:25.067629] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:54.167 [2024-12-06 18:24:25.067676] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 00:25:54.167 pt0 00:25:54.167 18:24:25 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:54.167 18:24:25 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@864 -- # rpc_cmd bdev_lvol_create_lvstore pt0 lvs0 00:25:54.167 18:24:25 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:54.167 18:24:25 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:54.428 ca244dbd-5d9a-452c-b189-c2966186ae3b 00:25:54.428 18:24:25 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:54.428 18:24:25 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@866 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol0 64 00:25:54.428 18:24:25 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:54.428 18:24:25 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:54.428 4f69de6d-529a-46d4-938d-adc1762ff355 00:25:54.428 18:24:25 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:54.428 18:24:25 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@867 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol1 64 00:25:54.428 18:24:25 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:54.428 18:24:25 
bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:54.428 298f97c3-e937-4a1a-9589-c2f6b553c080 00:25:54.428 18:24:25 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:54.428 18:24:25 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@869 -- # case $raid_level in 00:25:54.428 18:24:25 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@871 -- # rpc_cmd bdev_raid_create -n Raid -r 1 -b ''\''lvs0/lvol0 lvs0/lvol1'\''' -s 00:25:54.428 18:24:25 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:54.428 18:24:25 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:54.428 [2024-12-06 18:24:25.195301] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 4f69de6d-529a-46d4-938d-adc1762ff355 is claimed 00:25:54.428 [2024-12-06 18:24:25.195391] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 298f97c3-e937-4a1a-9589-c2f6b553c080 is claimed 00:25:54.428 [2024-12-06 18:24:25.195527] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:25:54.428 [2024-12-06 18:24:25.195545] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 122880, blocklen 512 00:25:54.428 [2024-12-06 18:24:25.195829] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:25:54.428 [2024-12-06 18:24:25.196009] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:25:54.428 [2024-12-06 18:24:25.196020] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:25:54.428 [2024-12-06 18:24:25.196183] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:54.428 18:24:25 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:54.428 18:24:25 bdev_raid.raid1_resize_superblock_test -- 
bdev/bdev_raid.sh@875 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:25:54.428 18:24:25 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:54.428 18:24:25 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:54.428 18:24:25 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # jq '.[].num_blocks' 00:25:54.428 18:24:25 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:54.428 18:24:25 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # (( 64 == 64 )) 00:25:54.428 18:24:25 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:25:54.428 18:24:25 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # jq '.[].num_blocks' 00:25:54.428 18:24:25 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:54.429 18:24:25 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:54.429 18:24:25 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:54.429 18:24:25 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # (( 64 == 64 )) 00:25:54.429 18:24:25 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:25:54.429 18:24:25 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@881 -- # rpc_cmd bdev_get_bdevs -b Raid 00:25:54.429 18:24:25 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:54.429 18:24:25 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:25:54.429 18:24:25 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@881 -- # jq '.[].num_blocks' 00:25:54.429 18:24:25 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:54.429 [2024-12-06 
18:24:25.303438] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:25:54.429 18:24:25 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:54.429 18:24:25 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:25:54.429 18:24:25 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:25:54.429 18:24:25 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@881 -- # (( 122880 == 122880 )) 00:25:54.429 18:24:25 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@885 -- # rpc_cmd bdev_lvol_resize lvs0/lvol0 100 00:25:54.429 18:24:25 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:54.429 18:24:25 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:54.429 [2024-12-06 18:24:25.343326] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:25:54.429 [2024-12-06 18:24:25.343353] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev '4f69de6d-529a-46d4-938d-adc1762ff355' was resized: old size 131072, new size 204800 00:25:54.429 18:24:25 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:54.429 18:24:25 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@886 -- # rpc_cmd bdev_lvol_resize lvs0/lvol1 100 00:25:54.429 18:24:25 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:54.429 18:24:25 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:54.429 [2024-12-06 18:24:25.355267] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:25:54.429 [2024-12-06 18:24:25.355293] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev '298f97c3-e937-4a1a-9589-c2f6b553c080' was resized: old size 131072, new size 204800 00:25:54.429 
[2024-12-06 18:24:25.355321] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 122880 to 196608 00:25:54.429 18:24:25 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:54.429 18:24:25 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:25:54.429 18:24:25 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:54.429 18:24:25 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # jq '.[].num_blocks' 00:25:54.429 18:24:25 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:54.687 18:24:25 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:54.687 18:24:25 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # (( 100 == 100 )) 00:25:54.687 18:24:25 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:25:54.687 18:24:25 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:54.687 18:24:25 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # jq '.[].num_blocks' 00:25:54.687 18:24:25 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:54.687 18:24:25 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:54.687 18:24:25 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # (( 100 == 100 )) 00:25:54.687 18:24:25 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:25:54.687 18:24:25 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # rpc_cmd bdev_get_bdevs -b Raid 00:25:54.687 18:24:25 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:54.687 18:24:25 
bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:54.687 18:24:25 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:25:54.687 18:24:25 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # jq '.[].num_blocks' 00:25:54.687 [2024-12-06 18:24:25.451292] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:25:54.687 18:24:25 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:54.687 18:24:25 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:25:54.687 18:24:25 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:25:54.687 18:24:25 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # (( 196608 == 196608 )) 00:25:54.687 18:24:25 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@898 -- # rpc_cmd bdev_passthru_delete pt0 00:25:54.687 18:24:25 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:54.687 18:24:25 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:54.687 [2024-12-06 18:24:25.486995] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev pt0 being removed: closing lvstore lvs0 00:25:54.687 [2024-12-06 18:24:25.487089] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol0 00:25:54.687 [2024-12-06 18:24:25.487126] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol1 00:25:54.687 [2024-12-06 18:24:25.487307] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:25:54.688 [2024-12-06 18:24:25.487526] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:25:54.688 [2024-12-06 18:24:25.487606] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:25:54.688 
[2024-12-06 18:24:25.487628] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:25:54.688 18:24:25 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:54.688 18:24:25 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@899 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:25:54.688 18:24:25 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:54.688 18:24:25 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:54.688 [2024-12-06 18:24:25.498898] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:25:54.688 [2024-12-06 18:24:25.498969] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:25:54.688 [2024-12-06 18:24:25.498997] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:25:54.688 [2024-12-06 18:24:25.499015] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:25:54.688 [2024-12-06 18:24:25.501853] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:25:54.688 [2024-12-06 18:24:25.502020] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 00:25:54.688 pt0 00:25:54.688 [2024-12-06 18:24:25.503857] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev 4f69de6d-529a-46d4-938d-adc1762ff355 00:25:54.688 [2024-12-06 18:24:25.504068] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 4f69de6d-529a-46d4-938d-adc1762ff355 is claimed 00:25:54.688 [2024-12-06 18:24:25.504424] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev 298f97c3-e937-4a1a-9589-c2f6b553c080 00:25:54.688 18:24:25 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:54.688 [2024-12-06 18:24:25.504614] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 298f97c3-e937-4a1a-9589-c2f6b553c080 is claimed 00:25:54.688 [2024-12-06 18:24:25.504757] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev 298f97c3-e937-4a1a-9589-c2f6b553c080 (2) smaller than existing raid bdev Raid (3) 00:25:54.688 [2024-12-06 18:24:25.504840] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev 4f69de6d-529a-46d4-938d-adc1762ff355: File exists 00:25:54.688 [2024-12-06 18:24:25.504969] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:25:54.688 18:24:25 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@900 -- # rpc_cmd bdev_wait_for_examine 00:25:54.688 [2024-12-06 18:24:25.504989] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:25:54.688 18:24:25 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:54.688 [2024-12-06 18:24:25.505272] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:25:54.688 [2024-12-06 18:24:25.505449] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:25:54.688 [2024-12-06 18:24:25.505461] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007b00 00:25:54.688 18:24:25 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:54.688 [2024-12-06 18:24:25.505617] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:54.688 18:24:25 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:54.688 18:24:25 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:25:54.688 18:24:25 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@906 -- # rpc_cmd bdev_get_bdevs -b Raid 00:25:54.688 18:24:25 bdev_raid.raid1_resize_superblock_test 
-- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:25:54.688 18:24:25 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@906 -- # jq '.[].num_blocks' 00:25:54.688 18:24:25 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:54.688 18:24:25 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:54.688 [2024-12-06 18:24:25.527065] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:25:54.688 18:24:25 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:54.688 18:24:25 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:25:54.688 18:24:25 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:25:54.688 18:24:25 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@906 -- # (( 196608 == 196608 )) 00:25:54.688 18:24:25 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@909 -- # killprocess 59969 00:25:54.688 18:24:25 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 59969 ']' 00:25:54.688 18:24:25 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@958 -- # kill -0 59969 00:25:54.688 18:24:25 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@959 -- # uname 00:25:54.688 18:24:25 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:54.688 18:24:25 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59969 00:25:54.688 killing process with pid 59969 00:25:54.688 18:24:25 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:54.688 18:24:25 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:54.688 18:24:25 bdev_raid.raid1_resize_superblock_test -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 59969' 00:25:54.688 18:24:25 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@973 -- # kill 59969 00:25:54.688 [2024-12-06 18:24:25.605453] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:25:54.688 [2024-12-06 18:24:25.605515] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:25:54.688 [2024-12-06 18:24:25.605561] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:25:54.688 [2024-12-06 18:24:25.605571] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Raid, state offline 00:25:54.688 18:24:25 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@978 -- # wait 59969 00:25:56.599 [2024-12-06 18:24:27.050485] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:25:57.532 ************************************ 00:25:57.532 END TEST raid1_resize_superblock_test 00:25:57.532 ************************************ 00:25:57.532 18:24:28 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@911 -- # return 0 00:25:57.532 00:25:57.532 real 0m4.661s 00:25:57.532 user 0m4.827s 00:25:57.532 sys 0m0.622s 00:25:57.532 18:24:28 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:57.532 18:24:28 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:25:57.532 18:24:28 bdev_raid -- bdev/bdev_raid.sh@956 -- # uname -s 00:25:57.532 18:24:28 bdev_raid -- bdev/bdev_raid.sh@956 -- # '[' Linux = Linux ']' 00:25:57.532 18:24:28 bdev_raid -- bdev/bdev_raid.sh@956 -- # modprobe -n nbd 00:25:57.532 18:24:28 bdev_raid -- bdev/bdev_raid.sh@957 -- # has_nbd=true 00:25:57.532 18:24:28 bdev_raid -- bdev/bdev_raid.sh@958 -- # modprobe nbd 00:25:57.532 18:24:28 bdev_raid -- bdev/bdev_raid.sh@959 -- # run_test raid_function_test_raid0 raid_function_test raid0 00:25:57.532 
18:24:28 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:25:57.532 18:24:28 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:57.532 18:24:28 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:25:57.532 ************************************ 00:25:57.532 START TEST raid_function_test_raid0 00:25:57.532 ************************************ 00:25:57.532 18:24:28 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@1129 -- # raid_function_test raid0 00:25:57.532 18:24:28 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@64 -- # local raid_level=raid0 00:25:57.532 18:24:28 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@65 -- # local nbd=/dev/nbd0 00:25:57.532 18:24:28 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@66 -- # local raid_bdev 00:25:57.532 Process raid pid: 60066 00:25:57.532 18:24:28 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@69 -- # raid_pid=60066 00:25:57.532 18:24:28 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@68 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:25:57.532 18:24:28 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@70 -- # echo 'Process raid pid: 60066' 00:25:57.532 18:24:28 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@71 -- # waitforlisten 60066 00:25:57.532 18:24:28 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@835 -- # '[' -z 60066 ']' 00:25:57.532 18:24:28 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:57.532 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:57.532 18:24:28 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:57.532 18:24:28 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:25:57.532 18:24:28 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:57.532 18:24:28 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:25:57.532 [2024-12-06 18:24:28.394790] Starting SPDK v25.01-pre git sha1 b6a18b192 / DPDK 24.03.0 initialization... 00:25:57.533 [2024-12-06 18:24:28.394915] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:57.790 [2024-12-06 18:24:28.569323] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:57.790 [2024-12-06 18:24:28.687428] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:58.049 [2024-12-06 18:24:28.905934] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:25:58.049 [2024-12-06 18:24:28.906200] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:25:58.308 18:24:29 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:58.308 18:24:29 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@868 -- # return 0 00:25:58.308 18:24:29 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@73 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_1 00:25:58.308 18:24:29 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:58.308 18:24:29 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:25:58.568 Base_1 00:25:58.568 18:24:29 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:58.568 18:24:29 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@74 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_2 00:25:58.568 18:24:29 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:58.568 
18:24:29 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:25:58.568 Base_2 00:25:58.568 18:24:29 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:58.568 18:24:29 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@75 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''Base_1 Base_2'\''' -n raid 00:25:58.568 18:24:29 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:58.568 18:24:29 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:25:58.568 [2024-12-06 18:24:29.317083] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:25:58.568 [2024-12-06 18:24:29.319300] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:25:58.568 [2024-12-06 18:24:29.319384] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:25:58.568 [2024-12-06 18:24:29.319399] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:25:58.568 [2024-12-06 18:24:29.319661] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:25:58.568 [2024-12-06 18:24:29.319793] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:25:58.568 [2024-12-06 18:24:29.319803] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid, raid_bdev 0x617000007780 00:25:58.568 [2024-12-06 18:24:29.319940] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:58.568 18:24:29 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:58.568 18:24:29 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@77 -- # rpc_cmd bdev_raid_get_bdevs online 00:25:58.568 18:24:29 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@77 -- # jq -r '.[0]["name"] | select(.)' 00:25:58.568 18:24:29 
bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:58.568 18:24:29 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:25:58.568 18:24:29 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:58.568 18:24:29 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@77 -- # raid_bdev=raid 00:25:58.568 18:24:29 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@78 -- # '[' raid = '' ']' 00:25:58.568 18:24:29 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@83 -- # nbd_start_disks /var/tmp/spdk.sock raid /dev/nbd0 00:25:58.568 18:24:29 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:25:58.568 18:24:29 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@10 -- # bdev_list=('raid') 00:25:58.568 18:24:29 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:25:58.568 18:24:29 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:25:58.568 18:24:29 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:25:58.568 18:24:29 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@12 -- # local i 00:25:58.568 18:24:29 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:25:58.568 18:24:29 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:25:58.568 18:24:29 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid /dev/nbd0 00:25:58.827 [2024-12-06 18:24:29.564796] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:25:58.827 /dev/nbd0 00:25:58.827 18:24:29 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:25:58.827 18:24:29 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@17 -- # 
waitfornbd nbd0 00:25:58.827 18:24:29 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:25:58.827 18:24:29 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@873 -- # local i 00:25:58.827 18:24:29 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:25:58.827 18:24:29 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:25:58.827 18:24:29 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:25:58.827 18:24:29 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@877 -- # break 00:25:58.827 18:24:29 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:25:58.827 18:24:29 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:25:58.827 18:24:29 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:25:58.827 1+0 records in 00:25:58.827 1+0 records out 00:25:58.827 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000381929 s, 10.7 MB/s 00:25:58.827 18:24:29 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:25:58.827 18:24:29 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@890 -- # size=4096 00:25:58.827 18:24:29 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:25:58.827 18:24:29 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:25:58.827 18:24:29 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@893 -- # return 0 00:25:58.827 18:24:29 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:25:58.827 18:24:29 bdev_raid.raid_function_test_raid0 -- 
bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:25:58.827 18:24:29 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@84 -- # nbd_get_count /var/tmp/spdk.sock 00:25:58.827 18:24:29 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:25:58.827 18:24:29 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:25:59.087 18:24:29 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:25:59.087 { 00:25:59.087 "nbd_device": "/dev/nbd0", 00:25:59.087 "bdev_name": "raid" 00:25:59.087 } 00:25:59.087 ]' 00:25:59.087 18:24:29 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # echo '[ 00:25:59.087 { 00:25:59.087 "nbd_device": "/dev/nbd0", 00:25:59.087 "bdev_name": "raid" 00:25:59.087 } 00:25:59.087 ]' 00:25:59.087 18:24:29 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:25:59.087 18:24:29 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:25:59.087 18:24:29 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:25:59.087 18:24:29 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:25:59.087 18:24:29 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # count=1 00:25:59.087 18:24:29 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@66 -- # echo 1 00:25:59.087 18:24:29 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@84 -- # count=1 00:25:59.087 18:24:29 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@85 -- # '[' 1 -ne 1 ']' 00:25:59.087 18:24:29 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@89 -- # raid_unmap_data_verify /dev/nbd0 00:25:59.087 18:24:29 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@17 -- # hash blkdiscard 00:25:59.087 18:24:29 bdev_raid.raid_function_test_raid0 -- 
bdev/bdev_raid.sh@18 -- # local nbd=/dev/nbd0 00:25:59.087 18:24:29 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@19 -- # local blksize 00:25:59.087 18:24:29 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # lsblk -o LOG-SEC /dev/nbd0 00:25:59.087 18:24:29 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # cut -d ' ' -f 5 00:25:59.087 18:24:29 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # grep -v LOG-SEC 00:25:59.087 18:24:29 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # blksize=512 00:25:59.087 18:24:29 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@21 -- # local rw_blk_num=4096 00:25:59.087 18:24:29 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@22 -- # local rw_len=2097152 00:25:59.087 18:24:29 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@23 -- # unmap_blk_offs=('0' '1028' '321') 00:25:59.087 18:24:29 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@23 -- # local unmap_blk_offs 00:25:59.087 18:24:29 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@24 -- # unmap_blk_nums=('128' '2035' '456') 00:25:59.087 18:24:29 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@24 -- # local unmap_blk_nums 00:25:59.087 18:24:29 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@25 -- # local unmap_off 00:25:59.087 18:24:29 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@26 -- # local unmap_len 00:25:59.087 18:24:29 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@29 -- # dd if=/dev/urandom of=/raidtest/raidrandtest bs=512 count=4096 00:25:59.087 4096+0 records in 00:25:59.087 4096+0 records out 00:25:59.087 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.0388368 s, 54.0 MB/s 00:25:59.087 18:24:29 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@30 -- # dd if=/raidtest/raidrandtest of=/dev/nbd0 bs=512 count=4096 oflag=direct 00:25:59.345 4096+0 records in 00:25:59.345 4096+0 records out 00:25:59.345 2097152 bytes (2.1 MB, 2.0 MiB) copied, 
0.213661 s, 9.8 MB/s 00:25:59.345 18:24:30 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@31 -- # blockdev --flushbufs /dev/nbd0 00:25:59.345 18:24:30 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@34 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:25:59.345 18:24:30 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i = 0 )) 00:25:59.345 18:24:30 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:25:59.345 18:24:30 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # unmap_off=0 00:25:59.345 18:24:30 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # unmap_len=65536 00:25:59.345 18:24:30 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=0 count=128 conv=notrunc 00:25:59.345 128+0 records in 00:25:59.345 128+0 records out 00:25:59.345 65536 bytes (66 kB, 64 KiB) copied, 0.00205333 s, 31.9 MB/s 00:25:59.345 18:24:30 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 0 -l 65536 /dev/nbd0 00:25:59.345 18:24:30 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:25:59.345 18:24:30 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:25:59.345 18:24:30 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:25:59.345 18:24:30 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:25:59.345 18:24:30 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # unmap_off=526336 00:25:59.345 18:24:30 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # unmap_len=1041920 00:25:59.345 18:24:30 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=1028 count=2035 conv=notrunc 00:25:59.345 2035+0 records in 00:25:59.346 2035+0 records out 00:25:59.346 1041920 
bytes (1.0 MB, 1018 KiB) copied, 0.020791 s, 50.1 MB/s 00:25:59.346 18:24:30 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 526336 -l 1041920 /dev/nbd0 00:25:59.605 18:24:30 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:25:59.605 18:24:30 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:25:59.605 18:24:30 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:25:59.605 18:24:30 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:25:59.605 18:24:30 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # unmap_off=164352 00:25:59.605 18:24:30 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # unmap_len=233472 00:25:59.605 18:24:30 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=321 count=456 conv=notrunc 00:25:59.605 456+0 records in 00:25:59.605 456+0 records out 00:25:59.605 233472 bytes (233 kB, 228 KiB) copied, 0.00523859 s, 44.6 MB/s 00:25:59.605 18:24:30 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 164352 -l 233472 /dev/nbd0 00:25:59.605 18:24:30 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:25:59.605 18:24:30 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:25:59.605 18:24:30 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:25:59.605 18:24:30 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:25:59.605 18:24:30 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@52 -- # return 0 00:25:59.605 18:24:30 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@91 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:25:59.605 18:24:30 bdev_raid.raid_function_test_raid0 
-- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:25:59.605 18:24:30 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:25:59.605 18:24:30 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:25:59.605 18:24:30 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@51 -- # local i 00:25:59.605 18:24:30 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:25:59.605 18:24:30 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:25:59.864 18:24:30 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:25:59.864 [2024-12-06 18:24:30.561936] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:25:59.864 18:24:30 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:25:59.864 18:24:30 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:25:59.864 18:24:30 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:25:59.864 18:24:30 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:25:59.864 18:24:30 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:25:59.864 18:24:30 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@41 -- # break 00:25:59.864 18:24:30 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@45 -- # return 0 00:25:59.864 18:24:30 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@92 -- # nbd_get_count /var/tmp/spdk.sock 00:25:59.864 18:24:30 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:25:59.864 18:24:30 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock 
nbd_get_disks 00:25:59.864 18:24:30 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:25:59.864 18:24:30 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:25:59.864 18:24:30 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:26:00.123 18:24:30 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:26:00.123 18:24:30 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # echo '' 00:26:00.123 18:24:30 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:26:00.123 18:24:30 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # true 00:26:00.123 18:24:30 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # count=0 00:26:00.123 18:24:30 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@66 -- # echo 0 00:26:00.123 18:24:30 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@92 -- # count=0 00:26:00.123 18:24:30 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@93 -- # '[' 0 -ne 0 ']' 00:26:00.123 18:24:30 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@97 -- # killprocess 60066 00:26:00.123 18:24:30 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@954 -- # '[' -z 60066 ']' 00:26:00.123 18:24:30 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@958 -- # kill -0 60066 00:26:00.123 18:24:30 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@959 -- # uname 00:26:00.123 18:24:30 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:00.123 18:24:30 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60066 00:26:00.123 killing process with pid 60066 00:26:00.123 18:24:30 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:26:00.123 18:24:30 bdev_raid.raid_function_test_raid0 
-- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:26:00.123 18:24:30 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60066' 00:26:00.123 18:24:30 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@973 -- # kill 60066 00:26:00.123 [2024-12-06 18:24:30.899796] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:26:00.123 [2024-12-06 18:24:30.899893] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:26:00.123 18:24:30 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@978 -- # wait 60066 00:26:00.123 [2024-12-06 18:24:30.899943] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:26:00.123 [2024-12-06 18:24:30.899961] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid, state offline 00:26:00.382 [2024-12-06 18:24:31.111079] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:26:01.318 ************************************ 00:26:01.318 END TEST raid_function_test_raid0 00:26:01.318 ************************************ 00:26:01.318 18:24:32 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@99 -- # return 0 00:26:01.318 00:26:01.318 real 0m3.948s 00:26:01.318 user 0m4.461s 00:26:01.318 sys 0m1.122s 00:26:01.318 18:24:32 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:01.318 18:24:32 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:26:01.578 18:24:32 bdev_raid -- bdev/bdev_raid.sh@960 -- # run_test raid_function_test_concat raid_function_test concat 00:26:01.578 18:24:32 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:26:01.578 18:24:32 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:01.578 18:24:32 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:26:01.578 
************************************ 00:26:01.578 START TEST raid_function_test_concat 00:26:01.578 ************************************ 00:26:01.578 18:24:32 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@1129 -- # raid_function_test concat 00:26:01.578 18:24:32 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@64 -- # local raid_level=concat 00:26:01.578 18:24:32 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@65 -- # local nbd=/dev/nbd0 00:26:01.578 18:24:32 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@66 -- # local raid_bdev 00:26:01.578 18:24:32 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@69 -- # raid_pid=60195 00:26:01.578 18:24:32 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@68 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:26:01.578 Process raid pid: 60195 00:26:01.578 18:24:32 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@70 -- # echo 'Process raid pid: 60195' 00:26:01.578 18:24:32 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@71 -- # waitforlisten 60195 00:26:01.578 18:24:32 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@835 -- # '[' -z 60195 ']' 00:26:01.578 18:24:32 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:01.578 18:24:32 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:01.578 18:24:32 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:01.578 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:26:01.578 18:24:32 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:01.578 18:24:32 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:26:01.578 [2024-12-06 18:24:32.423155] Starting SPDK v25.01-pre git sha1 b6a18b192 / DPDK 24.03.0 initialization... 00:26:01.578 [2024-12-06 18:24:32.423726] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:01.836 [2024-12-06 18:24:32.608285] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:01.837 [2024-12-06 18:24:32.727718] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:02.095 [2024-12-06 18:24:32.939419] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:26:02.095 [2024-12-06 18:24:32.939453] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:26:02.354 18:24:33 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:02.354 18:24:33 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@868 -- # return 0 00:26:02.354 18:24:33 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@73 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_1 00:26:02.354 18:24:33 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:02.354 18:24:33 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:26:02.354 Base_1 00:26:02.354 18:24:33 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:02.354 18:24:33 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@74 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_2 00:26:02.354 18:24:33 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@563 -- # xtrace_disable 
00:26:02.354 18:24:33 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:26:02.613 Base_2 00:26:02.613 18:24:33 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:02.613 18:24:33 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@75 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''Base_1 Base_2'\''' -n raid 00:26:02.613 18:24:33 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:02.613 18:24:33 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:26:02.613 [2024-12-06 18:24:33.346878] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:26:02.613 [2024-12-06 18:24:33.348918] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:26:02.613 [2024-12-06 18:24:33.348988] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:26:02.614 [2024-12-06 18:24:33.349002] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:26:02.614 [2024-12-06 18:24:33.349284] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:26:02.614 [2024-12-06 18:24:33.349446] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:26:02.614 [2024-12-06 18:24:33.349457] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid, raid_bdev 0x617000007780 00:26:02.614 [2024-12-06 18:24:33.349604] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:02.614 18:24:33 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:02.614 18:24:33 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@77 -- # rpc_cmd bdev_raid_get_bdevs online 00:26:02.614 18:24:33 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@77 -- # jq -r '.[0]["name"] | select(.)' 00:26:02.614 18:24:33 
bdev_raid.raid_function_test_concat -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:02.614 18:24:33 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:26:02.614 18:24:33 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:02.614 18:24:33 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@77 -- # raid_bdev=raid 00:26:02.614 18:24:33 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@78 -- # '[' raid = '' ']' 00:26:02.614 18:24:33 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@83 -- # nbd_start_disks /var/tmp/spdk.sock raid /dev/nbd0 00:26:02.614 18:24:33 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:26:02.614 18:24:33 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@10 -- # bdev_list=('raid') 00:26:02.614 18:24:33 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:26:02.614 18:24:33 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:26:02.614 18:24:33 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:26:02.614 18:24:33 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@12 -- # local i 00:26:02.614 18:24:33 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:26:02.614 18:24:33 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:26:02.614 18:24:33 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid /dev/nbd0 00:26:02.873 [2024-12-06 18:24:33.594573] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:26:02.873 /dev/nbd0 00:26:02.873 18:24:33 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:26:02.873 18:24:33 bdev_raid.raid_function_test_concat -- 
bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:26:02.873 18:24:33 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:26:02.873 18:24:33 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@873 -- # local i 00:26:02.873 18:24:33 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:26:02.873 18:24:33 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:26:02.873 18:24:33 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:26:02.873 18:24:33 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@877 -- # break 00:26:02.873 18:24:33 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:26:02.873 18:24:33 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:26:02.873 18:24:33 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:26:02.873 1+0 records in 00:26:02.873 1+0 records out 00:26:02.873 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000349476 s, 11.7 MB/s 00:26:02.873 18:24:33 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:26:02.873 18:24:33 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@890 -- # size=4096 00:26:02.873 18:24:33 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:26:02.873 18:24:33 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:26:02.873 18:24:33 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@893 -- # return 0 00:26:02.873 18:24:33 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:26:02.873 
18:24:33 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:26:02.873 18:24:33 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@84 -- # nbd_get_count /var/tmp/spdk.sock 00:26:02.873 18:24:33 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:26:02.873 18:24:33 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:26:03.132 18:24:33 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:26:03.132 { 00:26:03.132 "nbd_device": "/dev/nbd0", 00:26:03.132 "bdev_name": "raid" 00:26:03.132 } 00:26:03.132 ]' 00:26:03.132 18:24:33 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # echo '[ 00:26:03.132 { 00:26:03.132 "nbd_device": "/dev/nbd0", 00:26:03.132 "bdev_name": "raid" 00:26:03.132 } 00:26:03.132 ]' 00:26:03.132 18:24:33 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:26:03.132 18:24:33 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:26:03.132 18:24:33 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:26:03.132 18:24:33 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:26:03.132 18:24:33 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # count=1 00:26:03.132 18:24:33 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@66 -- # echo 1 00:26:03.132 18:24:33 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@84 -- # count=1 00:26:03.132 18:24:33 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@85 -- # '[' 1 -ne 1 ']' 00:26:03.132 18:24:33 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@89 -- # raid_unmap_data_verify /dev/nbd0 00:26:03.132 18:24:33 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@17 -- # hash blkdiscard 00:26:03.132 
18:24:33 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@18 -- # local nbd=/dev/nbd0 00:26:03.132 18:24:33 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@19 -- # local blksize 00:26:03.132 18:24:33 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # lsblk -o LOG-SEC /dev/nbd0 00:26:03.132 18:24:33 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # grep -v LOG-SEC 00:26:03.132 18:24:33 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # cut -d ' ' -f 5 00:26:03.132 18:24:33 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # blksize=512 00:26:03.132 18:24:33 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@21 -- # local rw_blk_num=4096 00:26:03.132 18:24:33 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@22 -- # local rw_len=2097152 00:26:03.132 18:24:33 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@23 -- # unmap_blk_offs=('0' '1028' '321') 00:26:03.132 18:24:33 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@23 -- # local unmap_blk_offs 00:26:03.132 18:24:33 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@24 -- # unmap_blk_nums=('128' '2035' '456') 00:26:03.132 18:24:33 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@24 -- # local unmap_blk_nums 00:26:03.132 18:24:33 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@25 -- # local unmap_off 00:26:03.132 18:24:33 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@26 -- # local unmap_len 00:26:03.132 18:24:33 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@29 -- # dd if=/dev/urandom of=/raidtest/raidrandtest bs=512 count=4096 00:26:03.132 4096+0 records in 00:26:03.132 4096+0 records out 00:26:03.132 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.0383148 s, 54.7 MB/s 00:26:03.132 18:24:34 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@30 -- # dd if=/raidtest/raidrandtest of=/dev/nbd0 bs=512 count=4096 oflag=direct 00:26:03.391 4096+0 records in 00:26:03.391 4096+0 
records out 00:26:03.391 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.282373 s, 7.4 MB/s 00:26:03.391 18:24:34 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@31 -- # blockdev --flushbufs /dev/nbd0 00:26:03.391 18:24:34 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@34 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:26:03.391 18:24:34 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i = 0 )) 00:26:03.650 18:24:34 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:26:03.650 18:24:34 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@37 -- # unmap_off=0 00:26:03.650 18:24:34 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # unmap_len=65536 00:26:03.650 18:24:34 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=0 count=128 conv=notrunc 00:26:03.650 128+0 records in 00:26:03.650 128+0 records out 00:26:03.650 65536 bytes (66 kB, 64 KiB) copied, 0.00147989 s, 44.3 MB/s 00:26:03.650 18:24:34 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 0 -l 65536 /dev/nbd0 00:26:03.650 18:24:34 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:26:03.650 18:24:34 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:26:03.650 18:24:34 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:26:03.650 18:24:34 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:26:03.650 18:24:34 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@37 -- # unmap_off=526336 00:26:03.650 18:24:34 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # unmap_len=1041920 00:26:03.650 18:24:34 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=1028 count=2035 conv=notrunc 
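The `unmap_off`/`unmap_len` values appearing in the loop iterations above are just the block offsets and counts scaled by the 512-byte logical sector size probed via `lsblk -o LOG-SEC` earlier. A standalone sketch of that arithmetic (variable names follow the log; the three offset/count pairs are the ones the test script uses):

```shell
#!/bin/sh
# unmap_off = blk_off * blksize, unmap_len = blk_num * blksize
blksize=512
for pair in "0 128" "1028 2035" "321 456"; do
  set -- $pair
  echo "unmap_off=$(( $1 * blksize )) unmap_len=$(( $2 * blksize ))"
done
```

Running it reproduces the three pairs seen in the log: `0/65536`, `526336/1041920`, and `164352/233472`.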
00:26:03.650 2035+0 records in 00:26:03.650 2035+0 records out 00:26:03.650 1041920 bytes (1.0 MB, 1018 KiB) copied, 0.0206084 s, 50.6 MB/s 00:26:03.650 18:24:34 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 526336 -l 1041920 /dev/nbd0 00:26:03.650 18:24:34 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:26:03.650 18:24:34 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:26:03.650 18:24:34 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:26:03.650 18:24:34 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:26:03.650 18:24:34 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@37 -- # unmap_off=164352 00:26:03.650 18:24:34 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # unmap_len=233472 00:26:03.650 18:24:34 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=321 count=456 conv=notrunc 00:26:03.650 456+0 records in 00:26:03.650 456+0 records out 00:26:03.650 233472 bytes (233 kB, 228 KiB) copied, 0.00562861 s, 41.5 MB/s 00:26:03.650 18:24:34 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 164352 -l 233472 /dev/nbd0 00:26:03.650 18:24:34 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:26:03.650 18:24:34 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:26:03.650 18:24:34 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:26:03.650 18:24:34 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:26:03.650 18:24:34 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@52 -- # return 0 00:26:03.650 18:24:34 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@91 -- # 
nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:26:03.650 18:24:34 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:26:03.650 18:24:34 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:26:03.650 18:24:34 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:26:03.650 18:24:34 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@51 -- # local i 00:26:03.650 18:24:34 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:26:03.650 18:24:34 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:26:03.909 18:24:34 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:26:03.909 [2024-12-06 18:24:34.682592] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:03.909 18:24:34 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:26:03.909 18:24:34 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:26:03.909 18:24:34 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:26:03.909 18:24:34 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:26:03.909 18:24:34 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:26:03.909 18:24:34 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@41 -- # break 00:26:03.909 18:24:34 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@45 -- # return 0 00:26:03.909 18:24:34 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@92 -- # nbd_get_count /var/tmp/spdk.sock 00:26:03.909 18:24:34 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:26:03.909 18:24:34 
bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:26:04.168 18:24:34 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:26:04.168 18:24:34 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:26:04.168 18:24:34 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:26:04.168 18:24:34 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:26:04.168 18:24:34 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # echo '' 00:26:04.168 18:24:34 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:26:04.168 18:24:34 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # true 00:26:04.168 18:24:34 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # count=0 00:26:04.168 18:24:34 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@66 -- # echo 0 00:26:04.168 18:24:34 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@92 -- # count=0 00:26:04.168 18:24:34 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@93 -- # '[' 0 -ne 0 ']' 00:26:04.168 18:24:34 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@97 -- # killprocess 60195 00:26:04.168 18:24:34 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@954 -- # '[' -z 60195 ']' 00:26:04.168 18:24:34 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@958 -- # kill -0 60195 00:26:04.168 18:24:34 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@959 -- # uname 00:26:04.168 18:24:34 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:04.168 18:24:34 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60195 00:26:04.168 killing process with pid 60195 00:26:04.168 18:24:35 
bdev_raid.raid_function_test_concat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:26:04.168 18:24:35 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:26:04.168 18:24:35 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60195' 00:26:04.168 18:24:35 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@973 -- # kill 60195 00:26:04.168 [2024-12-06 18:24:35.025664] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:26:04.168 [2024-12-06 18:24:35.025767] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:26:04.168 18:24:35 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@978 -- # wait 60195 00:26:04.168 [2024-12-06 18:24:35.025822] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:26:04.168 [2024-12-06 18:24:35.025839] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid, state offline 00:26:04.427 [2024-12-06 18:24:35.235541] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:26:05.807 18:24:36 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@99 -- # return 0 00:26:05.807 00:26:05.807 real 0m4.055s 00:26:05.807 user 0m4.557s 00:26:05.807 sys 0m1.180s 00:26:05.807 18:24:36 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:05.807 18:24:36 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:26:05.807 ************************************ 00:26:05.807 END TEST raid_function_test_concat 00:26:05.807 ************************************ 00:26:05.807 18:24:36 bdev_raid -- bdev/bdev_raid.sh@963 -- # run_test raid0_resize_test raid_resize_test 0 00:26:05.807 18:24:36 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:26:05.807 18:24:36 bdev_raid -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:26:05.807 18:24:36 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:26:05.807 ************************************ 00:26:05.807 START TEST raid0_resize_test 00:26:05.807 ************************************ 00:26:05.807 18:24:36 bdev_raid.raid0_resize_test -- common/autotest_common.sh@1129 -- # raid_resize_test 0 00:26:05.807 18:24:36 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@332 -- # local raid_level=0 00:26:05.807 18:24:36 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@333 -- # local blksize=512 00:26:05.807 18:24:36 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@334 -- # local bdev_size_mb=32 00:26:05.807 18:24:36 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@335 -- # local new_bdev_size_mb=64 00:26:05.807 18:24:36 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@336 -- # local blkcnt 00:26:05.807 18:24:36 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@337 -- # local raid_size_mb 00:26:05.807 18:24:36 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@338 -- # local new_raid_size_mb 00:26:05.807 18:24:36 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@339 -- # local expected_size 00:26:05.807 Process raid pid: 60324 00:26:05.807 18:24:36 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@342 -- # raid_pid=60324 00:26:05.807 18:24:36 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@343 -- # echo 'Process raid pid: 60324' 00:26:05.807 18:24:36 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@341 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:26:05.807 18:24:36 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@344 -- # waitforlisten 60324 00:26:05.807 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:26:05.807 18:24:36 bdev_raid.raid0_resize_test -- common/autotest_common.sh@835 -- # '[' -z 60324 ']' 00:26:05.807 18:24:36 bdev_raid.raid0_resize_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:05.807 18:24:36 bdev_raid.raid0_resize_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:05.807 18:24:36 bdev_raid.raid0_resize_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:05.807 18:24:36 bdev_raid.raid0_resize_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:05.807 18:24:36 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:26:05.807 [2024-12-06 18:24:36.547565] Starting SPDK v25.01-pre git sha1 b6a18b192 / DPDK 24.03.0 initialization... 00:26:05.807 [2024-12-06 18:24:36.547850] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:05.807 [2024-12-06 18:24:36.731630] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:06.067 [2024-12-06 18:24:36.851582] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:06.327 [2024-12-06 18:24:37.063625] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:26:06.327 [2024-12-06 18:24:37.063835] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:26:06.586 18:24:37 bdev_raid.raid0_resize_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:06.587 18:24:37 bdev_raid.raid0_resize_test -- common/autotest_common.sh@868 -- # return 0 00:26:06.587 18:24:37 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@346 -- # rpc_cmd bdev_null_create Base_1 32 512 00:26:06.587 18:24:37 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:26:06.587 18:24:37 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:26:06.587 Base_1 00:26:06.587 18:24:37 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:06.587 18:24:37 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@347 -- # rpc_cmd bdev_null_create Base_2 32 512 00:26:06.587 18:24:37 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:06.587 18:24:37 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:26:06.587 Base_2 00:26:06.587 18:24:37 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:06.587 18:24:37 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@349 -- # '[' 0 -eq 0 ']' 00:26:06.587 18:24:37 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@350 -- # rpc_cmd bdev_raid_create -z 64 -r 0 -b ''\''Base_1 Base_2'\''' -n Raid 00:26:06.587 18:24:37 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:06.587 18:24:37 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:26:06.587 [2024-12-06 18:24:37.445307] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:26:06.587 [2024-12-06 18:24:37.447519] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:26:06.587 [2024-12-06 18:24:37.447575] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:26:06.587 [2024-12-06 18:24:37.447589] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:26:06.587 [2024-12-06 18:24:37.447843] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:26:06.587 [2024-12-06 18:24:37.447969] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:26:06.587 [2024-12-06 18:24:37.447979] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 
00:26:06.587 [2024-12-06 18:24:37.448117] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:06.587 18:24:37 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:06.587 18:24:37 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@356 -- # rpc_cmd bdev_null_resize Base_1 64 00:26:06.587 18:24:37 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:06.587 18:24:37 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:26:06.587 [2024-12-06 18:24:37.453279] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:26:06.587 [2024-12-06 18:24:37.453308] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_1' was resized: old size 65536, new size 131072 00:26:06.587 true 00:26:06.587 18:24:37 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:06.587 18:24:37 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # rpc_cmd bdev_get_bdevs -b Raid 00:26:06.587 18:24:37 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # jq '.[].num_blocks' 00:26:06.587 18:24:37 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:06.587 18:24:37 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:26:06.587 [2024-12-06 18:24:37.465471] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:26:06.587 18:24:37 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:06.587 18:24:37 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # blkcnt=131072 00:26:06.587 18:24:37 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@360 -- # raid_size_mb=64 00:26:06.587 18:24:37 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@361 -- # '[' 0 -eq 0 ']' 00:26:06.587 18:24:37 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@362 -- # expected_size=64 00:26:06.587 18:24:37 bdev_raid.raid0_resize_test -- 
bdev/bdev_raid.sh@366 -- # '[' 64 '!=' 64 ']' 00:26:06.587 18:24:37 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@372 -- # rpc_cmd bdev_null_resize Base_2 64 00:26:06.587 18:24:37 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:06.587 18:24:37 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:26:06.587 [2024-12-06 18:24:37.509264] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:26:06.587 [2024-12-06 18:24:37.509289] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_2' was resized: old size 65536, new size 131072 00:26:06.587 [2024-12-06 18:24:37.509323] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 131072 to 262144 00:26:06.587 true 00:26:06.587 18:24:37 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:06.587 18:24:37 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # jq '.[].num_blocks' 00:26:06.587 18:24:37 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # rpc_cmd bdev_get_bdevs -b Raid 00:26:06.587 18:24:37 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:06.587 18:24:37 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:26:06.587 [2024-12-06 18:24:37.525451] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:26:06.848 18:24:37 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:06.848 18:24:37 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # blkcnt=262144 00:26:06.848 18:24:37 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@376 -- # raid_size_mb=128 00:26:06.848 18:24:37 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@377 -- # '[' 0 -eq 0 ']' 00:26:06.848 18:24:37 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@378 -- # expected_size=128 00:26:06.848 18:24:37 bdev_raid.raid0_resize_test -- 
bdev/bdev_raid.sh@382 -- # '[' 128 '!=' 128 ']' 00:26:06.848 18:24:37 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@387 -- # killprocess 60324 00:26:06.848 18:24:37 bdev_raid.raid0_resize_test -- common/autotest_common.sh@954 -- # '[' -z 60324 ']' 00:26:06.848 18:24:37 bdev_raid.raid0_resize_test -- common/autotest_common.sh@958 -- # kill -0 60324 00:26:06.848 18:24:37 bdev_raid.raid0_resize_test -- common/autotest_common.sh@959 -- # uname 00:26:06.848 18:24:37 bdev_raid.raid0_resize_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:06.848 18:24:37 bdev_raid.raid0_resize_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60324 00:26:06.848 killing process with pid 60324 00:26:06.848 18:24:37 bdev_raid.raid0_resize_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:26:06.848 18:24:37 bdev_raid.raid0_resize_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:26:06.848 18:24:37 bdev_raid.raid0_resize_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60324' 00:26:06.848 18:24:37 bdev_raid.raid0_resize_test -- common/autotest_common.sh@973 -- # kill 60324 00:26:06.848 [2024-12-06 18:24:37.604020] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:26:06.848 [2024-12-06 18:24:37.604089] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:26:06.848 [2024-12-06 18:24:37.604133] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:26:06.848 [2024-12-06 18:24:37.604158] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:26:06.848 18:24:37 bdev_raid.raid0_resize_test -- common/autotest_common.sh@978 -- # wait 60324 00:26:06.848 [2024-12-06 18:24:37.621640] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:26:08.222 18:24:38 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@389 -- # return 0 00:26:08.222 
00:26:08.222 real 0m2.313s 00:26:08.222 user 0m2.438s 00:26:08.222 sys 0m0.407s 00:26:08.222 18:24:38 bdev_raid.raid0_resize_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:08.222 ************************************ 00:26:08.222 END TEST raid0_resize_test 00:26:08.222 ************************************ 00:26:08.222 18:24:38 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:26:08.222 18:24:38 bdev_raid -- bdev/bdev_raid.sh@964 -- # run_test raid1_resize_test raid_resize_test 1 00:26:08.222 18:24:38 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:26:08.222 18:24:38 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:08.222 18:24:38 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:26:08.222 ************************************ 00:26:08.222 START TEST raid1_resize_test 00:26:08.222 ************************************ 00:26:08.222 18:24:38 bdev_raid.raid1_resize_test -- common/autotest_common.sh@1129 -- # raid_resize_test 1 00:26:08.222 18:24:38 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@332 -- # local raid_level=1 00:26:08.222 18:24:38 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@333 -- # local blksize=512 00:26:08.222 18:24:38 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@334 -- # local bdev_size_mb=32 00:26:08.222 18:24:38 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@335 -- # local new_bdev_size_mb=64 00:26:08.222 18:24:38 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@336 -- # local blkcnt 00:26:08.222 18:24:38 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@337 -- # local raid_size_mb 00:26:08.222 18:24:38 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@338 -- # local new_raid_size_mb 00:26:08.222 18:24:38 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@339 -- # local expected_size 00:26:08.222 18:24:38 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@342 -- # raid_pid=60386 00:26:08.222 18:24:38 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@341 -- # 
/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:26:08.222 Process raid pid: 60386 00:26:08.222 18:24:38 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@343 -- # echo 'Process raid pid: 60386' 00:26:08.222 18:24:38 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@344 -- # waitforlisten 60386 00:26:08.222 18:24:38 bdev_raid.raid1_resize_test -- common/autotest_common.sh@835 -- # '[' -z 60386 ']' 00:26:08.222 18:24:38 bdev_raid.raid1_resize_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:08.222 18:24:38 bdev_raid.raid1_resize_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:08.222 18:24:38 bdev_raid.raid1_resize_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:08.222 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:08.222 18:24:38 bdev_raid.raid1_resize_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:08.222 18:24:38 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:26:08.222 [2024-12-06 18:24:38.942841] Starting SPDK v25.01-pre git sha1 b6a18b192 / DPDK 24.03.0 initialization... 
00:26:08.222 [2024-12-06 18:24:38.942962] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:08.222 [2024-12-06 18:24:39.126803] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:08.480 [2024-12-06 18:24:39.240204] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:08.738 [2024-12-06 18:24:39.460200] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:26:08.738 [2024-12-06 18:24:39.460242] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:26:09.005 18:24:39 bdev_raid.raid1_resize_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:09.005 18:24:39 bdev_raid.raid1_resize_test -- common/autotest_common.sh@868 -- # return 0 00:26:09.005 18:24:39 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@346 -- # rpc_cmd bdev_null_create Base_1 32 512 00:26:09.005 18:24:39 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:09.005 18:24:39 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:26:09.005 Base_1 00:26:09.005 18:24:39 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:09.005 18:24:39 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@347 -- # rpc_cmd bdev_null_create Base_2 32 512 00:26:09.005 18:24:39 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:09.005 18:24:39 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:26:09.005 Base_2 00:26:09.005 18:24:39 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:09.005 18:24:39 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@349 -- # '[' 1 -eq 0 ']' 00:26:09.005 18:24:39 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@352 -- # rpc_cmd 
bdev_raid_create -r 1 -b ''\''Base_1 Base_2'\''' -n Raid 00:26:09.005 18:24:39 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:09.005 18:24:39 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:26:09.005 [2024-12-06 18:24:39.852846] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:26:09.005 [2024-12-06 18:24:39.855317] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:26:09.005 [2024-12-06 18:24:39.855379] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:26:09.005 [2024-12-06 18:24:39.855394] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:26:09.005 [2024-12-06 18:24:39.855658] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:26:09.005 [2024-12-06 18:24:39.855787] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:26:09.005 [2024-12-06 18:24:39.855797] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:26:09.005 [2024-12-06 18:24:39.855945] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:09.005 18:24:39 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:09.005 18:24:39 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@356 -- # rpc_cmd bdev_null_resize Base_1 64 00:26:09.005 18:24:39 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:09.005 18:24:39 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:26:09.005 [2024-12-06 18:24:39.864821] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:26:09.005 [2024-12-06 18:24:39.864873] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_1' was resized: old size 65536, new size 131072 00:26:09.005 true 00:26:09.005 
18:24:39 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:09.005 18:24:39 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # rpc_cmd bdev_get_bdevs -b Raid 00:26:09.005 18:24:39 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # jq '.[].num_blocks' 00:26:09.005 18:24:39 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:09.005 18:24:39 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:26:09.005 [2024-12-06 18:24:39.876964] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:26:09.005 18:24:39 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:09.005 18:24:39 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # blkcnt=65536 00:26:09.005 18:24:39 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@360 -- # raid_size_mb=32 00:26:09.005 18:24:39 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@361 -- # '[' 1 -eq 0 ']' 00:26:09.005 18:24:39 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@364 -- # expected_size=32 00:26:09.005 18:24:39 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@366 -- # '[' 32 '!=' 32 ']' 00:26:09.005 18:24:39 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@372 -- # rpc_cmd bdev_null_resize Base_2 64 00:26:09.005 18:24:39 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:09.006 18:24:39 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:26:09.006 [2024-12-06 18:24:39.920714] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:26:09.006 [2024-12-06 18:24:39.920740] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_2' was resized: old size 65536, new size 131072 00:26:09.006 [2024-12-06 18:24:39.920774] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 65536 to 131072 00:26:09.006 true 00:26:09.006 18:24:39 
bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:09.006 18:24:39 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # jq '.[].num_blocks' 00:26:09.006 18:24:39 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # rpc_cmd bdev_get_bdevs -b Raid 00:26:09.006 18:24:39 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:09.006 18:24:39 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:26:09.006 [2024-12-06 18:24:39.936845] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:26:09.265 18:24:39 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:09.265 18:24:39 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # blkcnt=131072 00:26:09.265 18:24:39 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@376 -- # raid_size_mb=64 00:26:09.265 18:24:39 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@377 -- # '[' 1 -eq 0 ']' 00:26:09.265 18:24:39 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@380 -- # expected_size=64 00:26:09.265 18:24:39 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@382 -- # '[' 64 '!=' 64 ']' 00:26:09.265 18:24:39 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@387 -- # killprocess 60386 00:26:09.265 18:24:39 bdev_raid.raid1_resize_test -- common/autotest_common.sh@954 -- # '[' -z 60386 ']' 00:26:09.265 18:24:39 bdev_raid.raid1_resize_test -- common/autotest_common.sh@958 -- # kill -0 60386 00:26:09.265 18:24:39 bdev_raid.raid1_resize_test -- common/autotest_common.sh@959 -- # uname 00:26:09.265 18:24:39 bdev_raid.raid1_resize_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:09.265 18:24:39 bdev_raid.raid1_resize_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60386 00:26:09.265 18:24:40 bdev_raid.raid1_resize_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:26:09.265 killing process with pid 60386 00:26:09.265 18:24:40 
bdev_raid.raid1_resize_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:26:09.265 18:24:40 bdev_raid.raid1_resize_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60386' 00:26:09.265 18:24:40 bdev_raid.raid1_resize_test -- common/autotest_common.sh@973 -- # kill 60386 00:26:09.265 [2024-12-06 18:24:40.019025] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:26:09.265 [2024-12-06 18:24:40.019105] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:26:09.265 18:24:40 bdev_raid.raid1_resize_test -- common/autotest_common.sh@978 -- # wait 60386 00:26:09.265 [2024-12-06 18:24:40.019594] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:26:09.265 [2024-12-06 18:24:40.019624] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:26:09.265 [2024-12-06 18:24:40.037003] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:26:10.639 18:24:41 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@389 -- # return 0 00:26:10.639 00:26:10.639 real 0m2.343s 00:26:10.639 user 0m2.489s 00:26:10.639 sys 0m0.399s 00:26:10.639 18:24:41 bdev_raid.raid1_resize_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:10.639 ************************************ 00:26:10.639 END TEST raid1_resize_test 00:26:10.639 ************************************ 00:26:10.639 18:24:41 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:26:10.639 18:24:41 bdev_raid -- bdev/bdev_raid.sh@966 -- # for n in {2..4} 00:26:10.639 18:24:41 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:26:10.639 18:24:41 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid0 2 false 00:26:10.639 18:24:41 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:26:10.639 18:24:41 bdev_raid -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:26:10.639 18:24:41 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:26:10.639 ************************************ 00:26:10.639 START TEST raid_state_function_test 00:26:10.639 ************************************ 00:26:10.639 18:24:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 2 false 00:26:10.639 18:24:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:26:10.639 18:24:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:26:10.639 18:24:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:26:10.639 18:24:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:26:10.639 18:24:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:26:10.639 18:24:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:26:10.639 18:24:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:26:10.639 18:24:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:26:10.639 18:24:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:26:10.639 18:24:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:26:10.639 18:24:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:26:10.639 18:24:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:26:10.639 18:24:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:26:10.639 18:24:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:26:10.639 18:24:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # 
local raid_bdev_name=Existed_Raid 00:26:10.639 18:24:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:26:10.639 18:24:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:26:10.639 18:24:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:26:10.639 18:24:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:26:10.639 18:24:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:26:10.639 18:24:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:26:10.639 18:24:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:26:10.639 18:24:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:26:10.639 18:24:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=60443 00:26:10.639 18:24:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:26:10.639 18:24:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 60443' 00:26:10.639 Process raid pid: 60443 00:26:10.639 18:24:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 60443 00:26:10.639 18:24:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 60443 ']' 00:26:10.639 18:24:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:10.639 18:24:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:10.639 18:24:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:26:10.639 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:10.639 18:24:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:10.639 18:24:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:10.639 [2024-12-06 18:24:41.388794] Starting SPDK v25.01-pre git sha1 b6a18b192 / DPDK 24.03.0 initialization... 00:26:10.639 [2024-12-06 18:24:41.389180] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:10.639 [2024-12-06 18:24:41.581555] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:10.899 [2024-12-06 18:24:41.704744] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:11.158 [2024-12-06 18:24:41.919242] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:26:11.158 [2024-12-06 18:24:41.919279] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:26:11.418 18:24:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:11.418 18:24:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:26:11.418 18:24:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:26:11.418 18:24:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:11.418 18:24:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:11.418 [2024-12-06 18:24:42.250610] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:26:11.418 [2024-12-06 18:24:42.250687] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev 
BaseBdev1 doesn't exist now 00:26:11.418 [2024-12-06 18:24:42.250699] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:26:11.418 [2024-12-06 18:24:42.250712] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:26:11.418 18:24:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:11.418 18:24:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:26:11.418 18:24:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:26:11.418 18:24:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:26:11.418 18:24:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:26:11.418 18:24:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:26:11.418 18:24:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:26:11.418 18:24:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:11.418 18:24:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:11.418 18:24:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:11.418 18:24:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:11.418 18:24:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:11.418 18:24:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:11.418 18:24:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:11.418 18:24:42 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:26:11.418 18:24:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:11.418 18:24:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:11.418 "name": "Existed_Raid", 00:26:11.418 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:11.418 "strip_size_kb": 64, 00:26:11.418 "state": "configuring", 00:26:11.418 "raid_level": "raid0", 00:26:11.418 "superblock": false, 00:26:11.418 "num_base_bdevs": 2, 00:26:11.418 "num_base_bdevs_discovered": 0, 00:26:11.418 "num_base_bdevs_operational": 2, 00:26:11.418 "base_bdevs_list": [ 00:26:11.418 { 00:26:11.418 "name": "BaseBdev1", 00:26:11.418 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:11.418 "is_configured": false, 00:26:11.418 "data_offset": 0, 00:26:11.418 "data_size": 0 00:26:11.418 }, 00:26:11.418 { 00:26:11.418 "name": "BaseBdev2", 00:26:11.418 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:11.418 "is_configured": false, 00:26:11.418 "data_offset": 0, 00:26:11.418 "data_size": 0 00:26:11.418 } 00:26:11.418 ] 00:26:11.418 }' 00:26:11.418 18:24:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:11.418 18:24:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:11.986 18:24:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:26:11.986 18:24:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:11.986 18:24:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:11.986 [2024-12-06 18:24:42.674005] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:26:11.986 [2024-12-06 18:24:42.674049] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:26:11.986 18:24:42 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:11.986 18:24:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:26:11.986 18:24:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:11.986 18:24:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:11.986 [2024-12-06 18:24:42.685992] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:26:11.986 [2024-12-06 18:24:42.686189] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:26:11.986 [2024-12-06 18:24:42.686211] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:26:11.986 [2024-12-06 18:24:42.686231] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:26:11.987 18:24:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:11.987 18:24:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:26:11.987 18:24:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:11.987 18:24:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:11.987 [2024-12-06 18:24:42.737602] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:26:11.987 BaseBdev1 00:26:11.987 18:24:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:11.987 18:24:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:26:11.987 18:24:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:26:11.987 18:24:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local 
bdev_timeout= 00:26:11.987 18:24:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:26:11.987 18:24:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:26:11.987 18:24:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:26:11.987 18:24:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:26:11.987 18:24:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:11.987 18:24:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:11.987 18:24:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:11.987 18:24:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:26:11.987 18:24:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:11.987 18:24:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:11.987 [ 00:26:11.987 { 00:26:11.987 "name": "BaseBdev1", 00:26:11.987 "aliases": [ 00:26:11.987 "94990e68-683b-4d39-ac49-0093621ff1d0" 00:26:11.987 ], 00:26:11.987 "product_name": "Malloc disk", 00:26:11.987 "block_size": 512, 00:26:11.987 "num_blocks": 65536, 00:26:11.987 "uuid": "94990e68-683b-4d39-ac49-0093621ff1d0", 00:26:11.987 "assigned_rate_limits": { 00:26:11.987 "rw_ios_per_sec": 0, 00:26:11.987 "rw_mbytes_per_sec": 0, 00:26:11.987 "r_mbytes_per_sec": 0, 00:26:11.987 "w_mbytes_per_sec": 0 00:26:11.987 }, 00:26:11.987 "claimed": true, 00:26:11.987 "claim_type": "exclusive_write", 00:26:11.987 "zoned": false, 00:26:11.987 "supported_io_types": { 00:26:11.987 "read": true, 00:26:11.987 "write": true, 00:26:11.987 "unmap": true, 00:26:11.987 "flush": true, 00:26:11.987 "reset": true, 00:26:11.987 "nvme_admin": false, 00:26:11.987 "nvme_io": 
false, 00:26:11.987 "nvme_io_md": false, 00:26:11.987 "write_zeroes": true, 00:26:11.987 "zcopy": true, 00:26:11.987 "get_zone_info": false, 00:26:11.987 "zone_management": false, 00:26:11.987 "zone_append": false, 00:26:11.987 "compare": false, 00:26:11.987 "compare_and_write": false, 00:26:11.987 "abort": true, 00:26:11.987 "seek_hole": false, 00:26:11.987 "seek_data": false, 00:26:11.987 "copy": true, 00:26:11.987 "nvme_iov_md": false 00:26:11.987 }, 00:26:11.987 "memory_domains": [ 00:26:11.987 { 00:26:11.987 "dma_device_id": "system", 00:26:11.987 "dma_device_type": 1 00:26:11.987 }, 00:26:11.987 { 00:26:11.987 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:11.987 "dma_device_type": 2 00:26:11.987 } 00:26:11.987 ], 00:26:11.987 "driver_specific": {} 00:26:11.987 } 00:26:11.987 ] 00:26:11.987 18:24:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:11.987 18:24:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:26:11.987 18:24:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:26:11.987 18:24:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:26:11.987 18:24:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:26:11.987 18:24:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:26:11.987 18:24:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:26:11.987 18:24:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:26:11.987 18:24:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:11.987 18:24:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:11.987 18:24:42 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:11.987 18:24:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:11.987 18:24:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:11.987 18:24:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:11.987 18:24:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:11.987 18:24:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:11.987 18:24:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:11.987 18:24:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:11.987 "name": "Existed_Raid", 00:26:11.987 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:11.987 "strip_size_kb": 64, 00:26:11.987 "state": "configuring", 00:26:11.987 "raid_level": "raid0", 00:26:11.987 "superblock": false, 00:26:11.987 "num_base_bdevs": 2, 00:26:11.987 "num_base_bdevs_discovered": 1, 00:26:11.987 "num_base_bdevs_operational": 2, 00:26:11.987 "base_bdevs_list": [ 00:26:11.987 { 00:26:11.987 "name": "BaseBdev1", 00:26:11.987 "uuid": "94990e68-683b-4d39-ac49-0093621ff1d0", 00:26:11.987 "is_configured": true, 00:26:11.987 "data_offset": 0, 00:26:11.987 "data_size": 65536 00:26:11.987 }, 00:26:11.987 { 00:26:11.987 "name": "BaseBdev2", 00:26:11.987 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:11.987 "is_configured": false, 00:26:11.987 "data_offset": 0, 00:26:11.987 "data_size": 0 00:26:11.987 } 00:26:11.987 ] 00:26:11.987 }' 00:26:11.987 18:24:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:11.987 18:24:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:12.556 18:24:43 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:26:12.556 18:24:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:12.556 18:24:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:12.556 [2024-12-06 18:24:43.217512] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:26:12.556 [2024-12-06 18:24:43.217765] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:26:12.556 18:24:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:12.556 18:24:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:26:12.556 18:24:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:12.556 18:24:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:12.556 [2024-12-06 18:24:43.229527] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:26:12.556 [2024-12-06 18:24:43.231794] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:26:12.556 [2024-12-06 18:24:43.231843] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:26:12.556 18:24:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:12.556 18:24:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:26:12.556 18:24:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:26:12.556 18:24:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:26:12.556 18:24:43 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:26:12.556 18:24:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:26:12.556 18:24:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:26:12.556 18:24:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:26:12.556 18:24:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:26:12.556 18:24:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:12.556 18:24:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:12.556 18:24:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:12.556 18:24:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:12.556 18:24:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:12.556 18:24:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:12.556 18:24:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:12.556 18:24:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:12.556 18:24:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:12.556 18:24:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:12.556 "name": "Existed_Raid", 00:26:12.556 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:12.556 "strip_size_kb": 64, 00:26:12.556 "state": "configuring", 00:26:12.556 "raid_level": "raid0", 00:26:12.556 "superblock": false, 00:26:12.556 "num_base_bdevs": 2, 00:26:12.556 "num_base_bdevs_discovered": 1, 00:26:12.556 "num_base_bdevs_operational": 2, 
00:26:12.556 "base_bdevs_list": [ 00:26:12.556 { 00:26:12.556 "name": "BaseBdev1", 00:26:12.556 "uuid": "94990e68-683b-4d39-ac49-0093621ff1d0", 00:26:12.556 "is_configured": true, 00:26:12.556 "data_offset": 0, 00:26:12.556 "data_size": 65536 00:26:12.556 }, 00:26:12.556 { 00:26:12.556 "name": "BaseBdev2", 00:26:12.556 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:12.556 "is_configured": false, 00:26:12.556 "data_offset": 0, 00:26:12.556 "data_size": 0 00:26:12.556 } 00:26:12.556 ] 00:26:12.556 }' 00:26:12.556 18:24:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:12.556 18:24:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:12.815 18:24:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:26:12.815 18:24:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:12.815 18:24:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:12.815 [2024-12-06 18:24:43.672358] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:26:12.815 [2024-12-06 18:24:43.672562] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:26:12.815 [2024-12-06 18:24:43.672608] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:26:12.815 [2024-12-06 18:24:43.672974] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:26:12.815 [2024-12-06 18:24:43.673176] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:26:12.815 [2024-12-06 18:24:43.673193] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:26:12.815 [2024-12-06 18:24:43.673472] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:12.815 BaseBdev2 00:26:12.815 
18:24:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:12.815 18:24:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:26:12.815 18:24:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:26:12.815 18:24:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:26:12.815 18:24:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:26:12.815 18:24:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:26:12.815 18:24:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:26:12.815 18:24:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:26:12.815 18:24:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:12.816 18:24:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:12.816 18:24:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:12.816 18:24:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:26:12.816 18:24:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:12.816 18:24:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:12.816 [ 00:26:12.816 { 00:26:12.816 "name": "BaseBdev2", 00:26:12.816 "aliases": [ 00:26:12.816 "daa12642-37ba-4b3d-8988-3420e0520dac" 00:26:12.816 ], 00:26:12.816 "product_name": "Malloc disk", 00:26:12.816 "block_size": 512, 00:26:12.816 "num_blocks": 65536, 00:26:12.816 "uuid": "daa12642-37ba-4b3d-8988-3420e0520dac", 00:26:12.816 "assigned_rate_limits": { 00:26:12.816 "rw_ios_per_sec": 0, 00:26:12.816 "rw_mbytes_per_sec": 0, 
00:26:12.816 "r_mbytes_per_sec": 0, 00:26:12.816 "w_mbytes_per_sec": 0 00:26:12.816 }, 00:26:12.816 "claimed": true, 00:26:12.816 "claim_type": "exclusive_write", 00:26:12.816 "zoned": false, 00:26:12.816 "supported_io_types": { 00:26:12.816 "read": true, 00:26:12.816 "write": true, 00:26:12.816 "unmap": true, 00:26:12.816 "flush": true, 00:26:12.816 "reset": true, 00:26:12.816 "nvme_admin": false, 00:26:12.816 "nvme_io": false, 00:26:12.816 "nvme_io_md": false, 00:26:12.816 "write_zeroes": true, 00:26:12.816 "zcopy": true, 00:26:12.816 "get_zone_info": false, 00:26:12.816 "zone_management": false, 00:26:12.816 "zone_append": false, 00:26:12.816 "compare": false, 00:26:12.816 "compare_and_write": false, 00:26:12.816 "abort": true, 00:26:12.816 "seek_hole": false, 00:26:12.816 "seek_data": false, 00:26:12.816 "copy": true, 00:26:12.816 "nvme_iov_md": false 00:26:12.816 }, 00:26:12.816 "memory_domains": [ 00:26:12.816 { 00:26:12.816 "dma_device_id": "system", 00:26:12.816 "dma_device_type": 1 00:26:12.816 }, 00:26:12.816 { 00:26:12.816 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:12.816 "dma_device_type": 2 00:26:12.816 } 00:26:12.816 ], 00:26:12.816 "driver_specific": {} 00:26:12.816 } 00:26:12.816 ] 00:26:12.816 18:24:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:12.816 18:24:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:26:12.816 18:24:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:26:12.816 18:24:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:26:12.816 18:24:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2 00:26:12.816 18:24:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:26:12.816 18:24:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:26:12.816 18:24:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:26:12.816 18:24:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:26:12.816 18:24:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:26:12.816 18:24:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:12.816 18:24:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:12.816 18:24:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:12.816 18:24:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:12.816 18:24:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:12.816 18:24:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:12.816 18:24:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:12.816 18:24:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:12.816 18:24:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:13.075 18:24:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:13.075 "name": "Existed_Raid", 00:26:13.075 "uuid": "363745b1-19d5-4c6b-9adb-476ccdd12bb7", 00:26:13.075 "strip_size_kb": 64, 00:26:13.075 "state": "online", 00:26:13.075 "raid_level": "raid0", 00:26:13.075 "superblock": false, 00:26:13.075 "num_base_bdevs": 2, 00:26:13.075 "num_base_bdevs_discovered": 2, 00:26:13.075 "num_base_bdevs_operational": 2, 00:26:13.075 "base_bdevs_list": [ 00:26:13.075 { 00:26:13.075 "name": "BaseBdev1", 00:26:13.075 "uuid": "94990e68-683b-4d39-ac49-0093621ff1d0", 00:26:13.075 
"is_configured": true, 00:26:13.075 "data_offset": 0, 00:26:13.075 "data_size": 65536 00:26:13.075 }, 00:26:13.075 { 00:26:13.075 "name": "BaseBdev2", 00:26:13.075 "uuid": "daa12642-37ba-4b3d-8988-3420e0520dac", 00:26:13.075 "is_configured": true, 00:26:13.075 "data_offset": 0, 00:26:13.075 "data_size": 65536 00:26:13.075 } 00:26:13.075 ] 00:26:13.075 }' 00:26:13.075 18:24:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:13.075 18:24:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:13.334 18:24:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:26:13.335 18:24:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:26:13.335 18:24:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:26:13.335 18:24:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:26:13.335 18:24:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:26:13.335 18:24:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:26:13.335 18:24:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:26:13.335 18:24:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:26:13.335 18:24:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:13.335 18:24:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:13.335 [2024-12-06 18:24:44.164143] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:26:13.335 18:24:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:13.335 18:24:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # 
raid_bdev_info='{ 00:26:13.335 "name": "Existed_Raid", 00:26:13.335 "aliases": [ 00:26:13.335 "363745b1-19d5-4c6b-9adb-476ccdd12bb7" 00:26:13.335 ], 00:26:13.335 "product_name": "Raid Volume", 00:26:13.335 "block_size": 512, 00:26:13.335 "num_blocks": 131072, 00:26:13.335 "uuid": "363745b1-19d5-4c6b-9adb-476ccdd12bb7", 00:26:13.335 "assigned_rate_limits": { 00:26:13.335 "rw_ios_per_sec": 0, 00:26:13.335 "rw_mbytes_per_sec": 0, 00:26:13.335 "r_mbytes_per_sec": 0, 00:26:13.335 "w_mbytes_per_sec": 0 00:26:13.335 }, 00:26:13.335 "claimed": false, 00:26:13.335 "zoned": false, 00:26:13.335 "supported_io_types": { 00:26:13.335 "read": true, 00:26:13.335 "write": true, 00:26:13.335 "unmap": true, 00:26:13.335 "flush": true, 00:26:13.335 "reset": true, 00:26:13.335 "nvme_admin": false, 00:26:13.335 "nvme_io": false, 00:26:13.335 "nvme_io_md": false, 00:26:13.335 "write_zeroes": true, 00:26:13.335 "zcopy": false, 00:26:13.335 "get_zone_info": false, 00:26:13.335 "zone_management": false, 00:26:13.335 "zone_append": false, 00:26:13.335 "compare": false, 00:26:13.335 "compare_and_write": false, 00:26:13.335 "abort": false, 00:26:13.335 "seek_hole": false, 00:26:13.335 "seek_data": false, 00:26:13.335 "copy": false, 00:26:13.335 "nvme_iov_md": false 00:26:13.335 }, 00:26:13.335 "memory_domains": [ 00:26:13.335 { 00:26:13.335 "dma_device_id": "system", 00:26:13.335 "dma_device_type": 1 00:26:13.335 }, 00:26:13.335 { 00:26:13.335 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:13.335 "dma_device_type": 2 00:26:13.335 }, 00:26:13.335 { 00:26:13.335 "dma_device_id": "system", 00:26:13.335 "dma_device_type": 1 00:26:13.335 }, 00:26:13.335 { 00:26:13.335 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:13.335 "dma_device_type": 2 00:26:13.335 } 00:26:13.335 ], 00:26:13.335 "driver_specific": { 00:26:13.335 "raid": { 00:26:13.335 "uuid": "363745b1-19d5-4c6b-9adb-476ccdd12bb7", 00:26:13.335 "strip_size_kb": 64, 00:26:13.335 "state": "online", 00:26:13.335 "raid_level": "raid0", 
00:26:13.335 "superblock": false, 00:26:13.335 "num_base_bdevs": 2, 00:26:13.335 "num_base_bdevs_discovered": 2, 00:26:13.335 "num_base_bdevs_operational": 2, 00:26:13.335 "base_bdevs_list": [ 00:26:13.335 { 00:26:13.335 "name": "BaseBdev1", 00:26:13.335 "uuid": "94990e68-683b-4d39-ac49-0093621ff1d0", 00:26:13.335 "is_configured": true, 00:26:13.335 "data_offset": 0, 00:26:13.335 "data_size": 65536 00:26:13.335 }, 00:26:13.335 { 00:26:13.335 "name": "BaseBdev2", 00:26:13.335 "uuid": "daa12642-37ba-4b3d-8988-3420e0520dac", 00:26:13.335 "is_configured": true, 00:26:13.335 "data_offset": 0, 00:26:13.335 "data_size": 65536 00:26:13.335 } 00:26:13.335 ] 00:26:13.335 } 00:26:13.335 } 00:26:13.335 }' 00:26:13.335 18:24:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:26:13.335 18:24:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:26:13.335 BaseBdev2' 00:26:13.335 18:24:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:26:13.595 18:24:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:26:13.595 18:24:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:26:13.595 18:24:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:26:13.595 18:24:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:13.595 18:24:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:13.595 18:24:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:26:13.595 18:24:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:26:13.595 18:24:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:26:13.595 18:24:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:26:13.595 18:24:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:26:13.595 18:24:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:26:13.595 18:24:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:13.595 18:24:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:13.595 18:24:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:26:13.595 18:24:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:13.595 18:24:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:26:13.595 18:24:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:26:13.595 18:24:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:26:13.595 18:24:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:13.595 18:24:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:13.595 [2024-12-06 18:24:44.399585] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:26:13.595 [2024-12-06 18:24:44.399625] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:26:13.595 [2024-12-06 18:24:44.399678] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:26:13.595 18:24:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:13.595 18:24:44 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:26:13.595 18:24:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:26:13.595 18:24:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:26:13.595 18:24:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:26:13.595 18:24:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:26:13.595 18:24:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:26:13.595 18:24:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:26:13.596 18:24:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:26:13.596 18:24:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:26:13.596 18:24:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:26:13.596 18:24:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:26:13.596 18:24:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:13.596 18:24:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:13.596 18:24:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:13.596 18:24:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:13.596 18:24:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:13.596 18:24:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:13.596 18:24:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:26:13.596 18:24:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:13.596 18:24:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:13.855 18:24:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:13.855 "name": "Existed_Raid", 00:26:13.855 "uuid": "363745b1-19d5-4c6b-9adb-476ccdd12bb7", 00:26:13.855 "strip_size_kb": 64, 00:26:13.855 "state": "offline", 00:26:13.856 "raid_level": "raid0", 00:26:13.856 "superblock": false, 00:26:13.856 "num_base_bdevs": 2, 00:26:13.856 "num_base_bdevs_discovered": 1, 00:26:13.856 "num_base_bdevs_operational": 1, 00:26:13.856 "base_bdevs_list": [ 00:26:13.856 { 00:26:13.856 "name": null, 00:26:13.856 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:13.856 "is_configured": false, 00:26:13.856 "data_offset": 0, 00:26:13.856 "data_size": 65536 00:26:13.856 }, 00:26:13.856 { 00:26:13.856 "name": "BaseBdev2", 00:26:13.856 "uuid": "daa12642-37ba-4b3d-8988-3420e0520dac", 00:26:13.856 "is_configured": true, 00:26:13.856 "data_offset": 0, 00:26:13.856 "data_size": 65536 00:26:13.856 } 00:26:13.856 ] 00:26:13.856 }' 00:26:13.856 18:24:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:13.856 18:24:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:14.115 18:24:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:26:14.115 18:24:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:26:14.115 18:24:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:14.115 18:24:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:14.115 18:24:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:26:14.115 18:24:44 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:14.115 18:24:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:14.115 18:24:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:26:14.115 18:24:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:26:14.115 18:24:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:26:14.115 18:24:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:14.115 18:24:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:14.115 [2024-12-06 18:24:44.987093] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:26:14.115 [2024-12-06 18:24:44.987151] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:26:14.374 18:24:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:14.374 18:24:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:26:14.374 18:24:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:26:14.374 18:24:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:14.374 18:24:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:26:14.374 18:24:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:14.374 18:24:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:14.374 18:24:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:14.374 18:24:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 
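The trace above shows the test's verification pattern: `bdev_raid_get_bdevs all` is piped through `jq` to pull out the `Existed_Raid` entry, and once the only remaining base bdev is deleted the raid0 bdev (which has no redundancy) is cleaned up, so the follow-up query yields an empty `raid_bdev=`. A minimal offline sketch of that check, using Python's `json` module in place of `jq` and a hard-coded sample copied from this log rather than a live RPC call:

```python
import json

# Sample output of `bdev_raid_get_bdevs all`, copied from the log above
# (abridged to the top-level fields the state check actually reads).
rpc_output = '''
[
  {
    "name": "Existed_Raid",
    "uuid": "363745b1-19d5-4c6b-9adb-476ccdd12bb7",
    "strip_size_kb": 64,
    "state": "offline",
    "raid_level": "raid0",
    "superblock": false,
    "num_base_bdevs": 2,
    "num_base_bdevs_discovered": 1,
    "num_base_bdevs_operational": 1
  }
]
'''

def raid_state(bdevs_json: str, name: str):
    """Mirror of jq '.[] | select(.name == NAME)': return the named
    raid bdev's state, or None if no such bdev is reported."""
    for bdev in json.loads(bdevs_json):
        if bdev["name"] == name:
            return bdev["state"]
    return None

print(raid_state(rpc_output, "Existed_Raid"))  # offline
print(raid_state(rpc_output, "Gone_Raid"))     # None (raid bdev was cleaned up)
```

In the real script the `offline` result is what `verify_raid_bdev_state` compares against `expected_state`, and the `None` case corresponds to the empty `raid_bdev=` seen after `bdev_malloc_delete BaseBdev2`.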
00:26:14.374 18:24:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:26:14.374 18:24:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:26:14.374 18:24:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 60443 00:26:14.374 18:24:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 60443 ']' 00:26:14.374 18:24:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 60443 00:26:14.374 18:24:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:26:14.374 18:24:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:14.374 18:24:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60443 00:26:14.374 18:24:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:26:14.374 killing process with pid 60443 00:26:14.374 18:24:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:26:14.374 18:24:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60443' 00:26:14.374 18:24:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 60443 00:26:14.374 [2024-12-06 18:24:45.181776] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:26:14.374 18:24:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 60443 00:26:14.374 [2024-12-06 18:24:45.198802] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:26:15.791 18:24:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:26:15.791 00:26:15.791 real 0m5.071s 00:26:15.791 user 0m7.250s 00:26:15.791 sys 0m0.949s 00:26:15.791 18:24:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # 
xtrace_disable 00:26:15.791 ************************************ 00:26:15.791 END TEST raid_state_function_test 00:26:15.791 ************************************ 00:26:15.791 18:24:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:15.791 18:24:46 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 2 true 00:26:15.791 18:24:46 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:26:15.791 18:24:46 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:15.791 18:24:46 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:26:15.791 ************************************ 00:26:15.791 START TEST raid_state_function_test_sb 00:26:15.791 ************************************ 00:26:15.791 18:24:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 2 true 00:26:15.791 18:24:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:26:15.791 18:24:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:26:15.791 18:24:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:26:15.791 18:24:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:26:15.791 18:24:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:26:15.791 18:24:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:26:15.791 18:24:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:26:15.791 18:24:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:26:15.791 18:24:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:26:15.791 18:24:46 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:26:15.791 18:24:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:26:15.791 18:24:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:26:15.791 18:24:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:26:15.791 18:24:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:26:15.791 18:24:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:26:15.791 18:24:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:26:15.791 18:24:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:26:15.791 18:24:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:26:15.791 18:24:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:26:15.791 18:24:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:26:15.791 18:24:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:26:15.791 18:24:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:26:15.791 18:24:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:26:15.791 18:24:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=60696 00:26:15.791 18:24:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 60696' 00:26:15.791 Process raid pid: 60696 00:26:15.791 18:24:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:26:15.791 18:24:46 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 60696 00:26:15.792 18:24:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 60696 ']' 00:26:15.792 18:24:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:15.792 18:24:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:15.792 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:15.792 18:24:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:15.792 18:24:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:15.792 18:24:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:15.792 [2024-12-06 18:24:46.525730] Starting SPDK v25.01-pre git sha1 b6a18b192 / DPDK 24.03.0 initialization... 
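Here `waitforlisten 60696` blocks until the freshly started `bdev_svc` process is accepting RPCs on `/var/tmp/spdk.sock`. SPDK's actual helper is a bash function in `autotest_common.sh`; the sketch below is only a minimal Python analogue of the same idea, polling until a connect to the UNIX-domain socket succeeds or a timeout expires (socket path and delays are illustrative):

```python
import os
import socket
import tempfile
import threading
import time

def wait_for_listen(sock_path: str, timeout: float = 5.0, interval: float = 0.05) -> bool:
    """Poll until a UNIX-domain socket at sock_path accepts a connection,
    roughly what waitforlisten does for /var/tmp/spdk.sock."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
                s.connect(sock_path)
                return True
        except OSError:
            time.sleep(interval)  # socket not there yet, or nobody listening
    return False

# Demo: a listener that comes up after a short "startup" delay.
sock_path = os.path.join(tempfile.mkdtemp(), "spdk.sock")

def serve():
    time.sleep(0.2)  # simulate app startup time before the RPC socket exists
    srv = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    srv.bind(sock_path)
    srv.listen(1)
    srv.accept()  # absorb the probe connection
    srv.close()

threading.Thread(target=serve, daemon=True).start()
ok = wait_for_listen(sock_path)
print(ok)  # True
```

The early probes fail with `FileNotFoundError`/`ConnectionRefusedError` (both `OSError`) until the server binds, which is why the loop retries rather than failing on the first miss.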
00:26:15.792 [2024-12-06 18:24:46.526050] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:15.792 [2024-12-06 18:24:46.698215] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:16.051 [2024-12-06 18:24:46.813944] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:16.309 [2024-12-06 18:24:47.029998] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:26:16.309 [2024-12-06 18:24:47.030035] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:26:16.569 18:24:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:16.569 18:24:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:26:16.569 18:24:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:26:16.569 18:24:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:16.569 18:24:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:16.569 [2024-12-06 18:24:47.372657] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:26:16.569 [2024-12-06 18:24:47.372909] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:26:16.569 [2024-12-06 18:24:47.372935] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:26:16.569 [2024-12-06 18:24:47.372951] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:26:16.569 18:24:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:16.569 
18:24:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:26:16.569 18:24:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:26:16.569 18:24:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:26:16.569 18:24:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:26:16.569 18:24:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:26:16.569 18:24:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:26:16.569 18:24:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:16.569 18:24:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:16.569 18:24:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:16.569 18:24:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:16.569 18:24:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:16.569 18:24:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:16.569 18:24:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:16.569 18:24:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:16.569 18:24:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:16.569 18:24:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:16.569 "name": "Existed_Raid", 00:26:16.569 "uuid": "baf826f6-9184-4652-8478-1fc43872354d", 00:26:16.569 "strip_size_kb": 
64, 00:26:16.569 "state": "configuring", 00:26:16.569 "raid_level": "raid0", 00:26:16.569 "superblock": true, 00:26:16.569 "num_base_bdevs": 2, 00:26:16.569 "num_base_bdevs_discovered": 0, 00:26:16.569 "num_base_bdevs_operational": 2, 00:26:16.569 "base_bdevs_list": [ 00:26:16.569 { 00:26:16.569 "name": "BaseBdev1", 00:26:16.569 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:16.569 "is_configured": false, 00:26:16.569 "data_offset": 0, 00:26:16.569 "data_size": 0 00:26:16.569 }, 00:26:16.569 { 00:26:16.569 "name": "BaseBdev2", 00:26:16.569 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:16.569 "is_configured": false, 00:26:16.569 "data_offset": 0, 00:26:16.569 "data_size": 0 00:26:16.569 } 00:26:16.569 ] 00:26:16.569 }' 00:26:16.569 18:24:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:16.569 18:24:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:17.137 18:24:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:26:17.137 18:24:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:17.137 18:24:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:17.137 [2024-12-06 18:24:47.792064] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:26:17.137 [2024-12-06 18:24:47.792107] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:26:17.137 18:24:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:17.137 18:24:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:26:17.137 18:24:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:17.137 18:24:47 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:17.137 [2024-12-06 18:24:47.804033] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:26:17.137 [2024-12-06 18:24:47.804099] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:26:17.137 [2024-12-06 18:24:47.804110] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:26:17.137 [2024-12-06 18:24:47.804126] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:26:17.137 18:24:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:17.137 18:24:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:26:17.137 18:24:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:17.137 18:24:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:17.137 [2024-12-06 18:24:47.854654] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:26:17.137 BaseBdev1 00:26:17.137 18:24:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:17.137 18:24:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:26:17.137 18:24:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:26:17.137 18:24:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:26:17.137 18:24:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:26:17.137 18:24:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:26:17.138 18:24:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # 
bdev_timeout=2000 00:26:17.138 18:24:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:26:17.138 18:24:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:17.138 18:24:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:17.138 18:24:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:17.138 18:24:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:26:17.138 18:24:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:17.138 18:24:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:17.138 [ 00:26:17.138 { 00:26:17.138 "name": "BaseBdev1", 00:26:17.138 "aliases": [ 00:26:17.138 "a6d38760-f51d-41af-885a-54e947a26245" 00:26:17.138 ], 00:26:17.138 "product_name": "Malloc disk", 00:26:17.138 "block_size": 512, 00:26:17.138 "num_blocks": 65536, 00:26:17.138 "uuid": "a6d38760-f51d-41af-885a-54e947a26245", 00:26:17.138 "assigned_rate_limits": { 00:26:17.138 "rw_ios_per_sec": 0, 00:26:17.138 "rw_mbytes_per_sec": 0, 00:26:17.138 "r_mbytes_per_sec": 0, 00:26:17.138 "w_mbytes_per_sec": 0 00:26:17.138 }, 00:26:17.138 "claimed": true, 00:26:17.138 "claim_type": "exclusive_write", 00:26:17.138 "zoned": false, 00:26:17.138 "supported_io_types": { 00:26:17.138 "read": true, 00:26:17.138 "write": true, 00:26:17.138 "unmap": true, 00:26:17.138 "flush": true, 00:26:17.138 "reset": true, 00:26:17.138 "nvme_admin": false, 00:26:17.138 "nvme_io": false, 00:26:17.138 "nvme_io_md": false, 00:26:17.138 "write_zeroes": true, 00:26:17.138 "zcopy": true, 00:26:17.138 "get_zone_info": false, 00:26:17.138 "zone_management": false, 00:26:17.138 "zone_append": false, 00:26:17.138 "compare": false, 00:26:17.138 "compare_and_write": false, 00:26:17.138 
"abort": true, 00:26:17.138 "seek_hole": false, 00:26:17.138 "seek_data": false, 00:26:17.138 "copy": true, 00:26:17.138 "nvme_iov_md": false 00:26:17.138 }, 00:26:17.138 "memory_domains": [ 00:26:17.138 { 00:26:17.138 "dma_device_id": "system", 00:26:17.138 "dma_device_type": 1 00:26:17.138 }, 00:26:17.138 { 00:26:17.138 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:17.138 "dma_device_type": 2 00:26:17.138 } 00:26:17.138 ], 00:26:17.138 "driver_specific": {} 00:26:17.138 } 00:26:17.138 ] 00:26:17.138 18:24:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:17.138 18:24:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:26:17.138 18:24:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:26:17.138 18:24:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:26:17.138 18:24:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:26:17.138 18:24:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:26:17.138 18:24:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:26:17.138 18:24:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:26:17.138 18:24:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:17.138 18:24:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:17.138 18:24:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:17.138 18:24:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:17.138 18:24:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r 
'.[] | select(.name == "Existed_Raid")' 00:26:17.138 18:24:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:17.138 18:24:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:17.138 18:24:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:17.138 18:24:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:17.138 18:24:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:17.138 "name": "Existed_Raid", 00:26:17.138 "uuid": "463e3b84-8c96-45e9-912d-0d24d3ab52fe", 00:26:17.138 "strip_size_kb": 64, 00:26:17.138 "state": "configuring", 00:26:17.138 "raid_level": "raid0", 00:26:17.138 "superblock": true, 00:26:17.138 "num_base_bdevs": 2, 00:26:17.138 "num_base_bdevs_discovered": 1, 00:26:17.138 "num_base_bdevs_operational": 2, 00:26:17.138 "base_bdevs_list": [ 00:26:17.138 { 00:26:17.138 "name": "BaseBdev1", 00:26:17.138 "uuid": "a6d38760-f51d-41af-885a-54e947a26245", 00:26:17.138 "is_configured": true, 00:26:17.138 "data_offset": 2048, 00:26:17.138 "data_size": 63488 00:26:17.138 }, 00:26:17.138 { 00:26:17.138 "name": "BaseBdev2", 00:26:17.138 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:17.138 "is_configured": false, 00:26:17.138 "data_offset": 0, 00:26:17.138 "data_size": 0 00:26:17.138 } 00:26:17.138 ] 00:26:17.138 }' 00:26:17.138 18:24:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:17.138 18:24:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:17.397 18:24:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:26:17.397 18:24:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:17.397 18:24:48 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:26:17.397 [2024-12-06 18:24:48.298092] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:26:17.397 [2024-12-06 18:24:48.298176] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:26:17.397 18:24:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:17.397 18:24:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:26:17.397 18:24:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:17.397 18:24:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:17.397 [2024-12-06 18:24:48.310119] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:26:17.397 [2024-12-06 18:24:48.312374] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:26:17.397 [2024-12-06 18:24:48.312418] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:26:17.397 18:24:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:17.397 18:24:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:26:17.397 18:24:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:26:17.397 18:24:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:26:17.397 18:24:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:26:17.397 18:24:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:26:17.397 18:24:48 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:26:17.397 18:24:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:26:17.397 18:24:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:26:17.397 18:24:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:17.397 18:24:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:17.398 18:24:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:17.398 18:24:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:17.398 18:24:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:17.398 18:24:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:17.398 18:24:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:17.398 18:24:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:17.398 18:24:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:17.657 18:24:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:17.657 "name": "Existed_Raid", 00:26:17.657 "uuid": "8eeade3d-86c5-4b82-b73d-a2a089f2e842", 00:26:17.657 "strip_size_kb": 64, 00:26:17.657 "state": "configuring", 00:26:17.657 "raid_level": "raid0", 00:26:17.657 "superblock": true, 00:26:17.657 "num_base_bdevs": 2, 00:26:17.657 "num_base_bdevs_discovered": 1, 00:26:17.657 "num_base_bdevs_operational": 2, 00:26:17.657 "base_bdevs_list": [ 00:26:17.657 { 00:26:17.657 "name": "BaseBdev1", 00:26:17.657 "uuid": "a6d38760-f51d-41af-885a-54e947a26245", 00:26:17.657 "is_configured": true, 00:26:17.657 "data_offset": 2048, 
00:26:17.657 "data_size": 63488 00:26:17.657 }, 00:26:17.657 { 00:26:17.657 "name": "BaseBdev2", 00:26:17.657 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:17.657 "is_configured": false, 00:26:17.657 "data_offset": 0, 00:26:17.657 "data_size": 0 00:26:17.657 } 00:26:17.657 ] 00:26:17.657 }' 00:26:17.657 18:24:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:17.657 18:24:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:17.917 18:24:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:26:17.917 18:24:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:17.917 18:24:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:17.917 [2024-12-06 18:24:48.772611] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:26:17.917 [2024-12-06 18:24:48.773187] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:26:17.917 BaseBdev2 00:26:17.917 [2024-12-06 18:24:48.773324] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:26:17.917 [2024-12-06 18:24:48.773654] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:26:17.917 [2024-12-06 18:24:48.773817] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:26:17.917 [2024-12-06 18:24:48.773835] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:26:17.917 [2024-12-06 18:24:48.774006] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:17.917 18:24:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:17.917 18:24:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # 
waitforbdev BaseBdev2 00:26:17.917 18:24:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:26:17.917 18:24:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:26:17.917 18:24:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:26:17.917 18:24:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:26:17.917 18:24:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:26:17.917 18:24:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:26:17.917 18:24:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:17.917 18:24:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:17.917 18:24:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:17.917 18:24:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:26:17.917 18:24:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:17.917 18:24:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:17.917 [ 00:26:17.917 { 00:26:17.917 "name": "BaseBdev2", 00:26:17.917 "aliases": [ 00:26:17.917 "38457750-0b22-4fa9-9136-cbd5bbc54ca7" 00:26:17.917 ], 00:26:17.917 "product_name": "Malloc disk", 00:26:17.917 "block_size": 512, 00:26:17.917 "num_blocks": 65536, 00:26:17.917 "uuid": "38457750-0b22-4fa9-9136-cbd5bbc54ca7", 00:26:17.917 "assigned_rate_limits": { 00:26:17.917 "rw_ios_per_sec": 0, 00:26:17.917 "rw_mbytes_per_sec": 0, 00:26:17.917 "r_mbytes_per_sec": 0, 00:26:17.917 "w_mbytes_per_sec": 0 00:26:17.917 }, 00:26:17.917 "claimed": true, 00:26:17.917 "claim_type": 
"exclusive_write", 00:26:17.917 "zoned": false, 00:26:17.917 "supported_io_types": { 00:26:17.917 "read": true, 00:26:17.917 "write": true, 00:26:17.917 "unmap": true, 00:26:17.917 "flush": true, 00:26:17.917 "reset": true, 00:26:17.917 "nvme_admin": false, 00:26:17.917 "nvme_io": false, 00:26:17.917 "nvme_io_md": false, 00:26:17.917 "write_zeroes": true, 00:26:17.917 "zcopy": true, 00:26:17.917 "get_zone_info": false, 00:26:17.917 "zone_management": false, 00:26:17.917 "zone_append": false, 00:26:17.917 "compare": false, 00:26:17.917 "compare_and_write": false, 00:26:17.917 "abort": true, 00:26:17.917 "seek_hole": false, 00:26:17.917 "seek_data": false, 00:26:17.917 "copy": true, 00:26:17.917 "nvme_iov_md": false 00:26:17.917 }, 00:26:17.917 "memory_domains": [ 00:26:17.917 { 00:26:17.917 "dma_device_id": "system", 00:26:17.917 "dma_device_type": 1 00:26:17.917 }, 00:26:17.917 { 00:26:17.917 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:17.917 "dma_device_type": 2 00:26:17.917 } 00:26:17.917 ], 00:26:17.917 "driver_specific": {} 00:26:17.917 } 00:26:17.917 ] 00:26:17.917 18:24:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:17.917 18:24:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:26:17.917 18:24:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:26:17.917 18:24:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:26:17.917 18:24:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2 00:26:17.917 18:24:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:26:17.917 18:24:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:26:17.917 18:24:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid0 00:26:17.917 18:24:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:26:17.917 18:24:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:26:17.917 18:24:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:17.917 18:24:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:17.917 18:24:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:17.917 18:24:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:17.917 18:24:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:17.917 18:24:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:17.917 18:24:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:17.917 18:24:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:17.917 18:24:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:17.917 18:24:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:17.917 "name": "Existed_Raid", 00:26:17.917 "uuid": "8eeade3d-86c5-4b82-b73d-a2a089f2e842", 00:26:17.917 "strip_size_kb": 64, 00:26:17.917 "state": "online", 00:26:17.917 "raid_level": "raid0", 00:26:17.917 "superblock": true, 00:26:17.917 "num_base_bdevs": 2, 00:26:17.917 "num_base_bdevs_discovered": 2, 00:26:17.917 "num_base_bdevs_operational": 2, 00:26:17.917 "base_bdevs_list": [ 00:26:17.917 { 00:26:17.917 "name": "BaseBdev1", 00:26:17.917 "uuid": "a6d38760-f51d-41af-885a-54e947a26245", 00:26:17.917 "is_configured": true, 00:26:17.917 "data_offset": 2048, 00:26:17.917 "data_size": 63488 
00:26:17.917 }, 00:26:17.917 { 00:26:17.917 "name": "BaseBdev2", 00:26:17.917 "uuid": "38457750-0b22-4fa9-9136-cbd5bbc54ca7", 00:26:17.917 "is_configured": true, 00:26:17.917 "data_offset": 2048, 00:26:17.917 "data_size": 63488 00:26:17.917 } 00:26:17.917 ] 00:26:17.917 }' 00:26:17.917 18:24:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:17.917 18:24:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:18.487 18:24:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:26:18.487 18:24:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:26:18.487 18:24:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:26:18.487 18:24:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:26:18.487 18:24:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:26:18.487 18:24:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:26:18.487 18:24:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:26:18.487 18:24:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:26:18.487 18:24:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:18.487 18:24:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:18.487 [2024-12-06 18:24:49.240350] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:26:18.487 18:24:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:18.487 18:24:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:26:18.487 "name": 
"Existed_Raid", 00:26:18.487 "aliases": [ 00:26:18.487 "8eeade3d-86c5-4b82-b73d-a2a089f2e842" 00:26:18.487 ], 00:26:18.487 "product_name": "Raid Volume", 00:26:18.487 "block_size": 512, 00:26:18.487 "num_blocks": 126976, 00:26:18.487 "uuid": "8eeade3d-86c5-4b82-b73d-a2a089f2e842", 00:26:18.487 "assigned_rate_limits": { 00:26:18.487 "rw_ios_per_sec": 0, 00:26:18.487 "rw_mbytes_per_sec": 0, 00:26:18.487 "r_mbytes_per_sec": 0, 00:26:18.487 "w_mbytes_per_sec": 0 00:26:18.487 }, 00:26:18.487 "claimed": false, 00:26:18.487 "zoned": false, 00:26:18.487 "supported_io_types": { 00:26:18.487 "read": true, 00:26:18.487 "write": true, 00:26:18.487 "unmap": true, 00:26:18.487 "flush": true, 00:26:18.487 "reset": true, 00:26:18.487 "nvme_admin": false, 00:26:18.487 "nvme_io": false, 00:26:18.487 "nvme_io_md": false, 00:26:18.487 "write_zeroes": true, 00:26:18.487 "zcopy": false, 00:26:18.487 "get_zone_info": false, 00:26:18.487 "zone_management": false, 00:26:18.487 "zone_append": false, 00:26:18.487 "compare": false, 00:26:18.487 "compare_and_write": false, 00:26:18.487 "abort": false, 00:26:18.487 "seek_hole": false, 00:26:18.487 "seek_data": false, 00:26:18.487 "copy": false, 00:26:18.487 "nvme_iov_md": false 00:26:18.487 }, 00:26:18.487 "memory_domains": [ 00:26:18.487 { 00:26:18.487 "dma_device_id": "system", 00:26:18.487 "dma_device_type": 1 00:26:18.487 }, 00:26:18.487 { 00:26:18.487 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:18.487 "dma_device_type": 2 00:26:18.487 }, 00:26:18.487 { 00:26:18.487 "dma_device_id": "system", 00:26:18.487 "dma_device_type": 1 00:26:18.487 }, 00:26:18.487 { 00:26:18.487 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:18.487 "dma_device_type": 2 00:26:18.487 } 00:26:18.487 ], 00:26:18.487 "driver_specific": { 00:26:18.487 "raid": { 00:26:18.487 "uuid": "8eeade3d-86c5-4b82-b73d-a2a089f2e842", 00:26:18.487 "strip_size_kb": 64, 00:26:18.487 "state": "online", 00:26:18.487 "raid_level": "raid0", 00:26:18.487 "superblock": true, 00:26:18.487 
"num_base_bdevs": 2, 00:26:18.487 "num_base_bdevs_discovered": 2, 00:26:18.487 "num_base_bdevs_operational": 2, 00:26:18.487 "base_bdevs_list": [ 00:26:18.487 { 00:26:18.487 "name": "BaseBdev1", 00:26:18.487 "uuid": "a6d38760-f51d-41af-885a-54e947a26245", 00:26:18.487 "is_configured": true, 00:26:18.487 "data_offset": 2048, 00:26:18.487 "data_size": 63488 00:26:18.487 }, 00:26:18.487 { 00:26:18.487 "name": "BaseBdev2", 00:26:18.487 "uuid": "38457750-0b22-4fa9-9136-cbd5bbc54ca7", 00:26:18.487 "is_configured": true, 00:26:18.487 "data_offset": 2048, 00:26:18.487 "data_size": 63488 00:26:18.487 } 00:26:18.487 ] 00:26:18.487 } 00:26:18.487 } 00:26:18.487 }' 00:26:18.487 18:24:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:26:18.487 18:24:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:26:18.487 BaseBdev2' 00:26:18.487 18:24:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:26:18.487 18:24:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:26:18.487 18:24:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:26:18.487 18:24:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:26:18.487 18:24:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:26:18.487 18:24:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:18.487 18:24:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:18.487 18:24:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:26:18.487 18:24:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:26:18.487 18:24:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:26:18.487 18:24:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:26:18.487 18:24:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:26:18.487 18:24:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:26:18.487 18:24:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:18.487 18:24:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:18.747 18:24:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:18.747 18:24:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:26:18.747 18:24:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:26:18.747 18:24:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:26:18.747 18:24:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:18.747 18:24:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:18.747 [2024-12-06 18:24:49.471790] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:26:18.747 [2024-12-06 18:24:49.471829] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:26:18.747 [2024-12-06 18:24:49.471879] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:26:18.747 18:24:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:26:18.747 18:24:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:26:18.747 18:24:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:26:18.747 18:24:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:26:18.747 18:24:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:26:18.747 18:24:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:26:18.747 18:24:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:26:18.747 18:24:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:26:18.747 18:24:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:26:18.747 18:24:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:26:18.747 18:24:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:26:18.747 18:24:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:26:18.747 18:24:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:18.747 18:24:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:18.747 18:24:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:18.747 18:24:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:18.747 18:24:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:18.747 18:24:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:18.747 18:24:49 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:18.747 18:24:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:18.747 18:24:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:18.747 18:24:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:18.747 "name": "Existed_Raid", 00:26:18.747 "uuid": "8eeade3d-86c5-4b82-b73d-a2a089f2e842", 00:26:18.747 "strip_size_kb": 64, 00:26:18.747 "state": "offline", 00:26:18.747 "raid_level": "raid0", 00:26:18.747 "superblock": true, 00:26:18.747 "num_base_bdevs": 2, 00:26:18.747 "num_base_bdevs_discovered": 1, 00:26:18.747 "num_base_bdevs_operational": 1, 00:26:18.747 "base_bdevs_list": [ 00:26:18.747 { 00:26:18.747 "name": null, 00:26:18.747 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:18.747 "is_configured": false, 00:26:18.747 "data_offset": 0, 00:26:18.747 "data_size": 63488 00:26:18.747 }, 00:26:18.747 { 00:26:18.747 "name": "BaseBdev2", 00:26:18.747 "uuid": "38457750-0b22-4fa9-9136-cbd5bbc54ca7", 00:26:18.747 "is_configured": true, 00:26:18.747 "data_offset": 2048, 00:26:18.747 "data_size": 63488 00:26:18.747 } 00:26:18.747 ] 00:26:18.747 }' 00:26:18.747 18:24:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:18.747 18:24:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:19.317 18:24:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:26:19.317 18:24:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:26:19.317 18:24:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:19.317 18:24:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:26:19.317 18:24:49 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:19.317 18:24:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:19.317 18:24:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:19.317 18:24:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:26:19.317 18:24:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:26:19.317 18:24:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:26:19.317 18:24:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:19.317 18:24:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:19.317 [2024-12-06 18:24:50.023036] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:26:19.317 [2024-12-06 18:24:50.023107] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:26:19.317 18:24:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:19.317 18:24:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:26:19.317 18:24:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:26:19.317 18:24:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:19.317 18:24:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:26:19.317 18:24:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:19.317 18:24:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:19.317 18:24:50 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:19.317 18:24:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:26:19.317 18:24:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:26:19.317 18:24:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:26:19.317 18:24:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 60696 00:26:19.317 18:24:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 60696 ']' 00:26:19.317 18:24:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 60696 00:26:19.317 18:24:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:26:19.317 18:24:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:19.317 18:24:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60696 00:26:19.317 18:24:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:26:19.317 18:24:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:26:19.317 18:24:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60696' 00:26:19.317 killing process with pid 60696 00:26:19.317 18:24:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 60696 00:26:19.317 [2024-12-06 18:24:50.217511] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:26:19.317 18:24:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 60696 00:26:19.317 [2024-12-06 18:24:50.234981] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:26:20.692 ************************************ 
00:26:20.692 END TEST raid_state_function_test_sb 00:26:20.692 ************************************ 00:26:20.692 18:24:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:26:20.692 00:26:20.692 real 0m4.955s 00:26:20.692 user 0m7.054s 00:26:20.692 sys 0m0.926s 00:26:20.692 18:24:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:20.692 18:24:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:20.692 18:24:51 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid0 2 00:26:20.692 18:24:51 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:26:20.692 18:24:51 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:20.692 18:24:51 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:26:20.692 ************************************ 00:26:20.692 START TEST raid_superblock_test 00:26:20.692 ************************************ 00:26:20.692 18:24:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid0 2 00:26:20.692 18:24:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0 00:26:20.692 18:24:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:26:20.692 18:24:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:26:20.692 18:24:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:26:20.692 18:24:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:26:20.692 18:24:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:26:20.692 18:24:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:26:20.693 18:24:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:26:20.693 
18:24:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:26:20.693 18:24:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:26:20.693 18:24:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:26:20.693 18:24:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:26:20.693 18:24:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:26:20.693 18:24:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 00:26:20.693 18:24:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:26:20.693 18:24:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:26:20.693 18:24:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=60948 00:26:20.693 18:24:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 60948 00:26:20.693 18:24:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:26:20.693 18:24:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 60948 ']' 00:26:20.693 18:24:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:20.693 18:24:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:20.693 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:20.693 18:24:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:26:20.693 18:24:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:20.693 18:24:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:20.693 [2024-12-06 18:24:51.547243] Starting SPDK v25.01-pre git sha1 b6a18b192 / DPDK 24.03.0 initialization... 00:26:20.693 [2024-12-06 18:24:51.547373] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60948 ] 00:26:20.951 [2024-12-06 18:24:51.721898] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:20.951 [2024-12-06 18:24:51.835025] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:21.209 [2024-12-06 18:24:52.038066] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:26:21.209 [2024-12-06 18:24:52.038112] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:26:21.468 18:24:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:21.468 18:24:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:26:21.468 18:24:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:26:21.468 18:24:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:26:21.468 18:24:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:26:21.468 18:24:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:26:21.468 18:24:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:26:21.468 18:24:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:26:21.468 18:24:52 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:26:21.468 18:24:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:26:21.468 18:24:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:26:21.468 18:24:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:21.468 18:24:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:21.726 malloc1 00:26:21.726 18:24:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:21.726 18:24:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:26:21.726 18:24:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:21.726 18:24:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:21.726 [2024-12-06 18:24:52.429606] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:26:21.726 [2024-12-06 18:24:52.429690] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:21.726 [2024-12-06 18:24:52.429716] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:26:21.726 [2024-12-06 18:24:52.429729] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:21.726 [2024-12-06 18:24:52.432126] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:21.726 [2024-12-06 18:24:52.432184] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:26:21.726 pt1 00:26:21.726 18:24:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:21.726 18:24:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:26:21.726 18:24:52 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:26:21.726 18:24:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:26:21.726 18:24:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:26:21.726 18:24:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:26:21.726 18:24:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:26:21.726 18:24:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:26:21.726 18:24:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:26:21.726 18:24:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:26:21.726 18:24:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:21.726 18:24:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:21.726 malloc2 00:26:21.726 18:24:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:21.726 18:24:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:26:21.726 18:24:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:21.726 18:24:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:21.726 [2024-12-06 18:24:52.486338] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:26:21.726 [2024-12-06 18:24:52.486403] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:21.726 [2024-12-06 18:24:52.486446] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:26:21.726 
[2024-12-06 18:24:52.486458] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:21.726 [2024-12-06 18:24:52.488906] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:21.726 [2024-12-06 18:24:52.488953] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:26:21.726 pt2 00:26:21.726 18:24:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:21.726 18:24:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:26:21.726 18:24:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:26:21.726 18:24:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:26:21.726 18:24:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:21.726 18:24:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:21.726 [2024-12-06 18:24:52.498380] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:26:21.726 [2024-12-06 18:24:52.500623] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:26:21.726 [2024-12-06 18:24:52.500795] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:26:21.726 [2024-12-06 18:24:52.500809] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:26:21.726 [2024-12-06 18:24:52.501084] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:26:21.726 [2024-12-06 18:24:52.501249] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:26:21.726 [2024-12-06 18:24:52.501263] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:26:21.726 [2024-12-06 18:24:52.501425] bdev_raid.c: 
345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:21.726 18:24:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:21.726 18:24:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:26:21.726 18:24:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:26:21.726 18:24:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:26:21.726 18:24:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:26:21.726 18:24:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:26:21.726 18:24:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:26:21.726 18:24:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:21.726 18:24:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:21.726 18:24:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:21.726 18:24:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:21.726 18:24:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:21.726 18:24:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:21.726 18:24:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:21.726 18:24:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:21.726 18:24:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:21.726 18:24:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:21.726 "name": "raid_bdev1", 00:26:21.726 "uuid": 
"2d4288a9-9a13-46f0-bb90-520ff397d5e3", 00:26:21.726 "strip_size_kb": 64, 00:26:21.726 "state": "online", 00:26:21.726 "raid_level": "raid0", 00:26:21.726 "superblock": true, 00:26:21.726 "num_base_bdevs": 2, 00:26:21.726 "num_base_bdevs_discovered": 2, 00:26:21.726 "num_base_bdevs_operational": 2, 00:26:21.726 "base_bdevs_list": [ 00:26:21.726 { 00:26:21.726 "name": "pt1", 00:26:21.726 "uuid": "00000000-0000-0000-0000-000000000001", 00:26:21.726 "is_configured": true, 00:26:21.726 "data_offset": 2048, 00:26:21.726 "data_size": 63488 00:26:21.726 }, 00:26:21.726 { 00:26:21.726 "name": "pt2", 00:26:21.726 "uuid": "00000000-0000-0000-0000-000000000002", 00:26:21.726 "is_configured": true, 00:26:21.726 "data_offset": 2048, 00:26:21.726 "data_size": 63488 00:26:21.726 } 00:26:21.726 ] 00:26:21.726 }' 00:26:21.726 18:24:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:21.726 18:24:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:21.985 18:24:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:26:21.985 18:24:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:26:21.985 18:24:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:26:21.985 18:24:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:26:21.985 18:24:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:26:21.985 18:24:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:26:21.985 18:24:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:26:21.985 18:24:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:21.985 18:24:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:26:21.985 18:24:52 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:21.985 [2024-12-06 18:24:52.886115] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:26:21.985 18:24:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:21.985 18:24:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:26:21.985 "name": "raid_bdev1", 00:26:21.985 "aliases": [ 00:26:21.985 "2d4288a9-9a13-46f0-bb90-520ff397d5e3" 00:26:21.985 ], 00:26:21.985 "product_name": "Raid Volume", 00:26:21.985 "block_size": 512, 00:26:21.985 "num_blocks": 126976, 00:26:21.985 "uuid": "2d4288a9-9a13-46f0-bb90-520ff397d5e3", 00:26:21.985 "assigned_rate_limits": { 00:26:21.985 "rw_ios_per_sec": 0, 00:26:21.985 "rw_mbytes_per_sec": 0, 00:26:21.985 "r_mbytes_per_sec": 0, 00:26:21.985 "w_mbytes_per_sec": 0 00:26:21.985 }, 00:26:21.985 "claimed": false, 00:26:21.985 "zoned": false, 00:26:21.985 "supported_io_types": { 00:26:21.985 "read": true, 00:26:21.985 "write": true, 00:26:21.985 "unmap": true, 00:26:21.985 "flush": true, 00:26:21.985 "reset": true, 00:26:21.985 "nvme_admin": false, 00:26:21.985 "nvme_io": false, 00:26:21.985 "nvme_io_md": false, 00:26:21.985 "write_zeroes": true, 00:26:21.985 "zcopy": false, 00:26:21.985 "get_zone_info": false, 00:26:21.985 "zone_management": false, 00:26:21.985 "zone_append": false, 00:26:21.985 "compare": false, 00:26:21.985 "compare_and_write": false, 00:26:21.986 "abort": false, 00:26:21.986 "seek_hole": false, 00:26:21.986 "seek_data": false, 00:26:21.986 "copy": false, 00:26:21.986 "nvme_iov_md": false 00:26:21.986 }, 00:26:21.986 "memory_domains": [ 00:26:21.986 { 00:26:21.986 "dma_device_id": "system", 00:26:21.986 "dma_device_type": 1 00:26:21.986 }, 00:26:21.986 { 00:26:21.986 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:21.986 "dma_device_type": 2 00:26:21.986 }, 00:26:21.986 { 00:26:21.986 "dma_device_id": "system", 00:26:21.986 "dma_device_type": 
1 00:26:21.986 }, 00:26:21.986 { 00:26:21.986 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:21.986 "dma_device_type": 2 00:26:21.986 } 00:26:21.986 ], 00:26:21.986 "driver_specific": { 00:26:21.986 "raid": { 00:26:21.986 "uuid": "2d4288a9-9a13-46f0-bb90-520ff397d5e3", 00:26:21.986 "strip_size_kb": 64, 00:26:21.986 "state": "online", 00:26:21.986 "raid_level": "raid0", 00:26:21.986 "superblock": true, 00:26:21.986 "num_base_bdevs": 2, 00:26:21.986 "num_base_bdevs_discovered": 2, 00:26:21.986 "num_base_bdevs_operational": 2, 00:26:21.986 "base_bdevs_list": [ 00:26:21.986 { 00:26:21.986 "name": "pt1", 00:26:21.986 "uuid": "00000000-0000-0000-0000-000000000001", 00:26:21.986 "is_configured": true, 00:26:21.986 "data_offset": 2048, 00:26:21.986 "data_size": 63488 00:26:21.986 }, 00:26:21.986 { 00:26:21.986 "name": "pt2", 00:26:21.986 "uuid": "00000000-0000-0000-0000-000000000002", 00:26:21.986 "is_configured": true, 00:26:21.986 "data_offset": 2048, 00:26:21.986 "data_size": 63488 00:26:21.986 } 00:26:21.986 ] 00:26:21.986 } 00:26:21.986 } 00:26:21.986 }' 00:26:21.986 18:24:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:26:22.245 18:24:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:26:22.245 pt2' 00:26:22.245 18:24:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:26:22.245 18:24:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:26:22.245 18:24:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:26:22.245 18:24:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:26:22.245 18:24:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" 
")' 00:26:22.245 18:24:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:22.245 18:24:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:22.245 18:24:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:22.245 18:24:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:26:22.246 18:24:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:26:22.246 18:24:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:26:22.246 18:24:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:26:22.246 18:24:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:22.246 18:24:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:22.246 18:24:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:26:22.246 18:24:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:22.246 18:24:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:26:22.246 18:24:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:26:22.246 18:24:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:26:22.246 18:24:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:26:22.246 18:24:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:22.246 18:24:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:22.246 [2024-12-06 18:24:53.097768] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:26:22.246 18:24:53 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:22.246 18:24:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=2d4288a9-9a13-46f0-bb90-520ff397d5e3 00:26:22.246 18:24:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 2d4288a9-9a13-46f0-bb90-520ff397d5e3 ']' 00:26:22.246 18:24:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:26:22.246 18:24:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:22.246 18:24:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:22.246 [2024-12-06 18:24:53.133430] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:26:22.246 [2024-12-06 18:24:53.133459] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:26:22.246 [2024-12-06 18:24:53.133535] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:26:22.246 [2024-12-06 18:24:53.133580] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:26:22.246 [2024-12-06 18:24:53.133594] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:26:22.246 18:24:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:22.246 18:24:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:22.246 18:24:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:22.246 18:24:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:22.246 18:24:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:26:22.246 18:24:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:22.246 18:24:53 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:26:22.246 18:24:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:26:22.246 18:24:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:26:22.246 18:24:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:26:22.246 18:24:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:22.246 18:24:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:22.506 18:24:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:22.506 18:24:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:26:22.506 18:24:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:26:22.506 18:24:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:22.506 18:24:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:22.506 18:24:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:22.506 18:24:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:26:22.506 18:24:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:26:22.506 18:24:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:22.506 18:24:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:22.506 18:24:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:22.506 18:24:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:26:22.506 18:24:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd 
bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:26:22.506 18:24:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:26:22.506 18:24:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:26:22.506 18:24:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:26:22.506 18:24:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:22.506 18:24:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:26:22.506 18:24:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:22.506 18:24:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:26:22.506 18:24:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:22.506 18:24:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:22.506 [2024-12-06 18:24:53.261487] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:26:22.506 [2024-12-06 18:24:53.263566] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:26:22.506 [2024-12-06 18:24:53.263633] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:26:22.506 [2024-12-06 18:24:53.263681] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:26:22.506 [2024-12-06 18:24:53.263698] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:26:22.506 [2024-12-06 18:24:53.263714] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:26:22.506 request: 00:26:22.506 { 00:26:22.506 "name": "raid_bdev1", 00:26:22.506 "raid_level": "raid0", 00:26:22.506 "base_bdevs": [ 00:26:22.506 "malloc1", 00:26:22.506 "malloc2" 00:26:22.506 ], 00:26:22.506 "strip_size_kb": 64, 00:26:22.506 "superblock": false, 00:26:22.506 "method": "bdev_raid_create", 00:26:22.506 "req_id": 1 00:26:22.506 } 00:26:22.506 Got JSON-RPC error response 00:26:22.506 response: 00:26:22.506 { 00:26:22.506 "code": -17, 00:26:22.506 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:26:22.506 } 00:26:22.506 18:24:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:26:22.506 18:24:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:26:22.506 18:24:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:26:22.506 18:24:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:26:22.506 18:24:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:26:22.506 18:24:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:22.506 18:24:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:22.506 18:24:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:22.506 18:24:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:26:22.506 18:24:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:22.506 18:24:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:26:22.506 18:24:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:26:22.506 18:24:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 
00000000-0000-0000-0000-000000000001 00:26:22.506 18:24:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:22.506 18:24:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:22.506 [2024-12-06 18:24:53.321469] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:26:22.506 [2024-12-06 18:24:53.321527] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:22.506 [2024-12-06 18:24:53.321546] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:26:22.506 [2024-12-06 18:24:53.321559] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:22.506 [2024-12-06 18:24:53.323957] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:22.506 [2024-12-06 18:24:53.324001] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:26:22.506 [2024-12-06 18:24:53.324077] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:26:22.506 [2024-12-06 18:24:53.324142] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:26:22.506 pt1 00:26:22.506 18:24:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:22.506 18:24:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 2 00:26:22.506 18:24:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:26:22.506 18:24:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:26:22.506 18:24:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:26:22.506 18:24:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:26:22.506 18:24:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=2 00:26:22.506 18:24:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:22.506 18:24:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:22.506 18:24:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:22.506 18:24:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:22.506 18:24:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:22.506 18:24:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:22.506 18:24:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:22.506 18:24:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:22.506 18:24:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:22.506 18:24:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:22.506 "name": "raid_bdev1", 00:26:22.506 "uuid": "2d4288a9-9a13-46f0-bb90-520ff397d5e3", 00:26:22.506 "strip_size_kb": 64, 00:26:22.506 "state": "configuring", 00:26:22.506 "raid_level": "raid0", 00:26:22.506 "superblock": true, 00:26:22.506 "num_base_bdevs": 2, 00:26:22.506 "num_base_bdevs_discovered": 1, 00:26:22.506 "num_base_bdevs_operational": 2, 00:26:22.506 "base_bdevs_list": [ 00:26:22.506 { 00:26:22.506 "name": "pt1", 00:26:22.506 "uuid": "00000000-0000-0000-0000-000000000001", 00:26:22.506 "is_configured": true, 00:26:22.506 "data_offset": 2048, 00:26:22.506 "data_size": 63488 00:26:22.506 }, 00:26:22.506 { 00:26:22.506 "name": null, 00:26:22.506 "uuid": "00000000-0000-0000-0000-000000000002", 00:26:22.506 "is_configured": false, 00:26:22.506 "data_offset": 2048, 00:26:22.506 "data_size": 63488 00:26:22.506 } 00:26:22.506 ] 00:26:22.506 }' 00:26:22.506 18:24:53 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:22.506 18:24:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:23.075 18:24:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:26:23.075 18:24:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:26:23.075 18:24:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:26:23.075 18:24:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:26:23.075 18:24:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:23.075 18:24:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:23.075 [2024-12-06 18:24:53.749314] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:26:23.075 [2024-12-06 18:24:53.749400] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:23.075 [2024-12-06 18:24:53.749425] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:26:23.075 [2024-12-06 18:24:53.749439] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:23.075 [2024-12-06 18:24:53.749902] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:23.075 [2024-12-06 18:24:53.749935] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:26:23.075 [2024-12-06 18:24:53.750017] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:26:23.075 [2024-12-06 18:24:53.750048] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:26:23.075 [2024-12-06 18:24:53.750175] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:26:23.075 [2024-12-06 18:24:53.750189] 
bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:26:23.075 [2024-12-06 18:24:53.750440] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:26:23.075 [2024-12-06 18:24:53.750592] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:26:23.075 [2024-12-06 18:24:53.750601] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:26:23.075 [2024-12-06 18:24:53.750736] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:23.075 pt2 00:26:23.075 18:24:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:23.075 18:24:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:26:23.075 18:24:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:26:23.075 18:24:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:26:23.075 18:24:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:26:23.075 18:24:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:26:23.075 18:24:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:26:23.075 18:24:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:26:23.075 18:24:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:26:23.075 18:24:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:23.075 18:24:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:23.075 18:24:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:23.075 18:24:53 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:26:23.075 18:24:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:23.075 18:24:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:23.075 18:24:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:23.075 18:24:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:23.075 18:24:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:23.075 18:24:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:23.075 "name": "raid_bdev1", 00:26:23.075 "uuid": "2d4288a9-9a13-46f0-bb90-520ff397d5e3", 00:26:23.075 "strip_size_kb": 64, 00:26:23.075 "state": "online", 00:26:23.075 "raid_level": "raid0", 00:26:23.075 "superblock": true, 00:26:23.075 "num_base_bdevs": 2, 00:26:23.075 "num_base_bdevs_discovered": 2, 00:26:23.075 "num_base_bdevs_operational": 2, 00:26:23.075 "base_bdevs_list": [ 00:26:23.075 { 00:26:23.075 "name": "pt1", 00:26:23.075 "uuid": "00000000-0000-0000-0000-000000000001", 00:26:23.075 "is_configured": true, 00:26:23.075 "data_offset": 2048, 00:26:23.075 "data_size": 63488 00:26:23.075 }, 00:26:23.075 { 00:26:23.075 "name": "pt2", 00:26:23.075 "uuid": "00000000-0000-0000-0000-000000000002", 00:26:23.075 "is_configured": true, 00:26:23.075 "data_offset": 2048, 00:26:23.075 "data_size": 63488 00:26:23.075 } 00:26:23.075 ] 00:26:23.075 }' 00:26:23.075 18:24:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:23.075 18:24:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:23.356 18:24:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:26:23.356 18:24:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:26:23.356 
18:24:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:26:23.356 18:24:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:26:23.356 18:24:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:26:23.356 18:24:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:26:23.356 18:24:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:26:23.356 18:24:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:23.356 18:24:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:26:23.356 18:24:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:23.356 [2024-12-06 18:24:54.205398] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:26:23.356 18:24:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:23.356 18:24:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:26:23.356 "name": "raid_bdev1", 00:26:23.356 "aliases": [ 00:26:23.356 "2d4288a9-9a13-46f0-bb90-520ff397d5e3" 00:26:23.356 ], 00:26:23.356 "product_name": "Raid Volume", 00:26:23.356 "block_size": 512, 00:26:23.356 "num_blocks": 126976, 00:26:23.356 "uuid": "2d4288a9-9a13-46f0-bb90-520ff397d5e3", 00:26:23.356 "assigned_rate_limits": { 00:26:23.356 "rw_ios_per_sec": 0, 00:26:23.356 "rw_mbytes_per_sec": 0, 00:26:23.356 "r_mbytes_per_sec": 0, 00:26:23.356 "w_mbytes_per_sec": 0 00:26:23.356 }, 00:26:23.356 "claimed": false, 00:26:23.356 "zoned": false, 00:26:23.356 "supported_io_types": { 00:26:23.356 "read": true, 00:26:23.356 "write": true, 00:26:23.356 "unmap": true, 00:26:23.356 "flush": true, 00:26:23.356 "reset": true, 00:26:23.356 "nvme_admin": false, 00:26:23.356 "nvme_io": false, 00:26:23.356 "nvme_io_md": false, 00:26:23.356 
"write_zeroes": true, 00:26:23.356 "zcopy": false, 00:26:23.356 "get_zone_info": false, 00:26:23.356 "zone_management": false, 00:26:23.356 "zone_append": false, 00:26:23.356 "compare": false, 00:26:23.356 "compare_and_write": false, 00:26:23.356 "abort": false, 00:26:23.356 "seek_hole": false, 00:26:23.356 "seek_data": false, 00:26:23.356 "copy": false, 00:26:23.356 "nvme_iov_md": false 00:26:23.356 }, 00:26:23.356 "memory_domains": [ 00:26:23.356 { 00:26:23.356 "dma_device_id": "system", 00:26:23.356 "dma_device_type": 1 00:26:23.356 }, 00:26:23.356 { 00:26:23.356 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:23.356 "dma_device_type": 2 00:26:23.356 }, 00:26:23.356 { 00:26:23.356 "dma_device_id": "system", 00:26:23.356 "dma_device_type": 1 00:26:23.356 }, 00:26:23.356 { 00:26:23.356 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:23.356 "dma_device_type": 2 00:26:23.356 } 00:26:23.356 ], 00:26:23.356 "driver_specific": { 00:26:23.356 "raid": { 00:26:23.356 "uuid": "2d4288a9-9a13-46f0-bb90-520ff397d5e3", 00:26:23.356 "strip_size_kb": 64, 00:26:23.356 "state": "online", 00:26:23.356 "raid_level": "raid0", 00:26:23.356 "superblock": true, 00:26:23.356 "num_base_bdevs": 2, 00:26:23.356 "num_base_bdevs_discovered": 2, 00:26:23.356 "num_base_bdevs_operational": 2, 00:26:23.356 "base_bdevs_list": [ 00:26:23.356 { 00:26:23.356 "name": "pt1", 00:26:23.356 "uuid": "00000000-0000-0000-0000-000000000001", 00:26:23.356 "is_configured": true, 00:26:23.356 "data_offset": 2048, 00:26:23.356 "data_size": 63488 00:26:23.356 }, 00:26:23.356 { 00:26:23.356 "name": "pt2", 00:26:23.356 "uuid": "00000000-0000-0000-0000-000000000002", 00:26:23.356 "is_configured": true, 00:26:23.356 "data_offset": 2048, 00:26:23.356 "data_size": 63488 00:26:23.356 } 00:26:23.356 ] 00:26:23.356 } 00:26:23.356 } 00:26:23.356 }' 00:26:23.356 18:24:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 
00:26:23.356 18:24:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:26:23.356 pt2' 00:26:23.356 18:24:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:26:23.632 18:24:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:26:23.632 18:24:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:26:23.632 18:24:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:26:23.632 18:24:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:23.632 18:24:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:23.632 18:24:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:26:23.632 18:24:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:23.632 18:24:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:26:23.632 18:24:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:26:23.632 18:24:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:26:23.632 18:24:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:26:23.632 18:24:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:26:23.632 18:24:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:23.632 18:24:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:23.632 18:24:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:23.632 18:24:54 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:26:23.632 18:24:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:26:23.632 18:24:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:26:23.632 18:24:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:23.632 18:24:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:26:23.632 18:24:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:23.632 [2024-12-06 18:24:54.413021] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:26:23.632 18:24:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:23.632 18:24:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 2d4288a9-9a13-46f0-bb90-520ff397d5e3 '!=' 2d4288a9-9a13-46f0-bb90-520ff397d5e3 ']' 00:26:23.632 18:24:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0 00:26:23.632 18:24:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:26:23.632 18:24:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:26:23.632 18:24:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 60948 00:26:23.632 18:24:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 60948 ']' 00:26:23.632 18:24:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 60948 00:26:23.632 18:24:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:26:23.632 18:24:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:23.632 18:24:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60948 00:26:23.632 18:24:54 bdev_raid.raid_superblock_test 
-- common/autotest_common.sh@960 -- # process_name=reactor_0 00:26:23.632 18:24:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:26:23.632 killing process with pid 60948 00:26:23.632 18:24:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60948' 00:26:23.632 18:24:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 60948 00:26:23.632 [2024-12-06 18:24:54.491265] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:26:23.632 [2024-12-06 18:24:54.491348] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:26:23.632 [2024-12-06 18:24:54.491394] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:26:23.632 [2024-12-06 18:24:54.491408] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:26:23.632 18:24:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 60948 00:26:23.891 [2024-12-06 18:24:54.698091] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:26:25.269 18:24:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:26:25.269 00:26:25.269 real 0m4.378s 00:26:25.269 user 0m6.047s 00:26:25.269 sys 0m0.865s 00:26:25.269 18:24:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:25.269 18:24:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:25.269 ************************************ 00:26:25.269 END TEST raid_superblock_test 00:26:25.269 ************************************ 00:26:25.269 18:24:55 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid0 2 read 00:26:25.269 18:24:55 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:26:25.269 18:24:55 bdev_raid -- common/autotest_common.sh@1111 -- # 
xtrace_disable 00:26:25.269 18:24:55 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:26:25.269 ************************************ 00:26:25.269 START TEST raid_read_error_test 00:26:25.269 ************************************ 00:26:25.269 18:24:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 2 read 00:26:25.269 18:24:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:26:25.269 18:24:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:26:25.269 18:24:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:26:25.269 18:24:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:26:25.269 18:24:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:26:25.269 18:24:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:26:25.269 18:24:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:26:25.269 18:24:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:26:25.269 18:24:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:26:25.269 18:24:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:26:25.269 18:24:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:26:25.269 18:24:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:26:25.269 18:24:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:26:25.269 18:24:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:26:25.269 18:24:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:26:25.269 18:24:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- 
# local create_arg 00:26:25.269 18:24:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:26:25.269 18:24:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:26:25.269 18:24:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:26:25.269 18:24:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:26:25.269 18:24:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:26:25.269 18:24:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:26:25.269 18:24:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.fpriTrXOXX 00:26:25.269 18:24:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=61154 00:26:25.269 18:24:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 61154 00:26:25.269 18:24:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:26:25.269 18:24:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 61154 ']' 00:26:25.269 18:24:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:25.269 18:24:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:25.269 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:25.269 18:24:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:26:25.269 18:24:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:25.269 18:24:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:26:25.269 [2024-12-06 18:24:56.016479] Starting SPDK v25.01-pre git sha1 b6a18b192 / DPDK 24.03.0 initialization... 00:26:25.269 [2024-12-06 18:24:56.016619] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61154 ] 00:26:25.269 [2024-12-06 18:24:56.195428] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:25.526 [2024-12-06 18:24:56.307094] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:25.784 [2024-12-06 18:24:56.520826] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:26:25.785 [2024-12-06 18:24:56.520873] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:26:26.044 18:24:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:26.044 18:24:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:26:26.044 18:24:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:26:26.044 18:24:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:26:26.044 18:24:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:26.044 18:24:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:26:26.044 BaseBdev1_malloc 00:26:26.044 18:24:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:26.044 18:24:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 
00:26:26.044 18:24:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:26.044 18:24:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:26:26.044 true 00:26:26.044 18:24:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:26.044 18:24:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:26:26.044 18:24:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:26.044 18:24:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:26:26.044 [2024-12-06 18:24:56.924315] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:26:26.044 [2024-12-06 18:24:56.924378] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:26.044 [2024-12-06 18:24:56.924400] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:26:26.044 [2024-12-06 18:24:56.924414] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:26.044 [2024-12-06 18:24:56.926896] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:26.044 [2024-12-06 18:24:56.926944] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:26:26.044 BaseBdev1 00:26:26.044 18:24:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:26.044 18:24:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:26:26.044 18:24:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:26:26.044 18:24:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:26.044 18:24:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 
00:26:26.044 BaseBdev2_malloc 00:26:26.044 18:24:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:26.044 18:24:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:26:26.044 18:24:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:26.044 18:24:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:26:26.044 true 00:26:26.044 18:24:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:26.044 18:24:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:26:26.044 18:24:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:26.044 18:24:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:26:26.304 [2024-12-06 18:24:56.994373] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:26:26.304 [2024-12-06 18:24:56.994433] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:26.304 [2024-12-06 18:24:56.994452] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:26:26.304 [2024-12-06 18:24:56.994467] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:26.304 [2024-12-06 18:24:56.996955] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:26.304 [2024-12-06 18:24:56.996999] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:26:26.304 BaseBdev2 00:26:26.304 18:24:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:26.304 18:24:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:26:26.304 18:24:56 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:26.304 18:24:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:26:26.304 [2024-12-06 18:24:57.006416] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:26:26.304 [2024-12-06 18:24:57.008522] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:26:26.304 [2024-12-06 18:24:57.008714] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:26:26.304 [2024-12-06 18:24:57.008734] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:26:26.304 [2024-12-06 18:24:57.008980] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:26:26.304 [2024-12-06 18:24:57.009171] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:26:26.304 [2024-12-06 18:24:57.009190] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:26:26.304 [2024-12-06 18:24:57.009389] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:26.304 18:24:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:26.304 18:24:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:26:26.304 18:24:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:26:26.304 18:24:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:26:26.304 18:24:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:26:26.304 18:24:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:26:26.304 18:24:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 
00:26:26.304 18:24:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:26.304 18:24:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:26.304 18:24:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:26.304 18:24:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:26.304 18:24:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:26.304 18:24:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:26.304 18:24:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:26.304 18:24:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:26:26.304 18:24:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:26.304 18:24:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:26.304 "name": "raid_bdev1", 00:26:26.304 "uuid": "85fb4c54-c1f2-43d5-a128-de85cf42a41b", 00:26:26.304 "strip_size_kb": 64, 00:26:26.304 "state": "online", 00:26:26.304 "raid_level": "raid0", 00:26:26.304 "superblock": true, 00:26:26.304 "num_base_bdevs": 2, 00:26:26.304 "num_base_bdevs_discovered": 2, 00:26:26.304 "num_base_bdevs_operational": 2, 00:26:26.304 "base_bdevs_list": [ 00:26:26.304 { 00:26:26.304 "name": "BaseBdev1", 00:26:26.304 "uuid": "862d395a-7030-5dbd-b73f-18ac9a4e7ae6", 00:26:26.304 "is_configured": true, 00:26:26.304 "data_offset": 2048, 00:26:26.304 "data_size": 63488 00:26:26.304 }, 00:26:26.304 { 00:26:26.304 "name": "BaseBdev2", 00:26:26.304 "uuid": "2a794f05-cd8a-56b8-8ff9-1229c3c306f9", 00:26:26.304 "is_configured": true, 00:26:26.304 "data_offset": 2048, 00:26:26.304 "data_size": 63488 00:26:26.304 } 00:26:26.304 ] 00:26:26.304 }' 00:26:26.304 18:24:57 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:26.304 18:24:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:26:26.563 18:24:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:26:26.563 18:24:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:26:26.820 [2024-12-06 18:24:57.522978] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:26:27.754 18:24:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:26:27.754 18:24:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:27.754 18:24:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:26:27.754 18:24:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:27.754 18:24:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:26:27.754 18:24:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:26:27.754 18:24:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:26:27.754 18:24:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:26:27.754 18:24:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:26:27.754 18:24:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:26:27.754 18:24:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:26:27.754 18:24:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:26:27.754 18:24:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 
00:26:27.754 18:24:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:27.754 18:24:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:27.754 18:24:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:27.754 18:24:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:27.754 18:24:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:27.754 18:24:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:27.754 18:24:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:27.754 18:24:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:26:27.754 18:24:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:27.754 18:24:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:27.754 "name": "raid_bdev1", 00:26:27.754 "uuid": "85fb4c54-c1f2-43d5-a128-de85cf42a41b", 00:26:27.754 "strip_size_kb": 64, 00:26:27.754 "state": "online", 00:26:27.754 "raid_level": "raid0", 00:26:27.754 "superblock": true, 00:26:27.754 "num_base_bdevs": 2, 00:26:27.754 "num_base_bdevs_discovered": 2, 00:26:27.754 "num_base_bdevs_operational": 2, 00:26:27.754 "base_bdevs_list": [ 00:26:27.754 { 00:26:27.754 "name": "BaseBdev1", 00:26:27.754 "uuid": "862d395a-7030-5dbd-b73f-18ac9a4e7ae6", 00:26:27.754 "is_configured": true, 00:26:27.754 "data_offset": 2048, 00:26:27.754 "data_size": 63488 00:26:27.754 }, 00:26:27.754 { 00:26:27.754 "name": "BaseBdev2", 00:26:27.754 "uuid": "2a794f05-cd8a-56b8-8ff9-1229c3c306f9", 00:26:27.754 "is_configured": true, 00:26:27.754 "data_offset": 2048, 00:26:27.754 "data_size": 63488 00:26:27.754 } 00:26:27.754 ] 00:26:27.754 }' 00:26:27.754 18:24:58 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:27.754 18:24:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:26:28.012 18:24:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:26:28.012 18:24:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:28.012 18:24:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:26:28.012 [2024-12-06 18:24:58.873977] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:26:28.012 [2024-12-06 18:24:58.874025] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:26:28.012 [2024-12-06 18:24:58.876872] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:26:28.012 [2024-12-06 18:24:58.876922] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:28.012 [2024-12-06 18:24:58.876953] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:26:28.012 [2024-12-06 18:24:58.876968] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:26:28.012 { 00:26:28.012 "results": [ 00:26:28.012 { 00:26:28.012 "job": "raid_bdev1", 00:26:28.012 "core_mask": "0x1", 00:26:28.012 "workload": "randrw", 00:26:28.012 "percentage": 50, 00:26:28.012 "status": "finished", 00:26:28.012 "queue_depth": 1, 00:26:28.012 "io_size": 131072, 00:26:28.012 "runtime": 1.351215, 00:26:28.012 "iops": 15605.214566149725, 00:26:28.012 "mibps": 1950.6518207687157, 00:26:28.012 "io_failed": 1, 00:26:28.012 "io_timeout": 0, 00:26:28.012 "avg_latency_us": 88.26179855762977, 00:26:28.012 "min_latency_us": 27.347791164658634, 00:26:28.012 "max_latency_us": 1500.2216867469879 00:26:28.012 } 00:26:28.012 ], 00:26:28.012 "core_count": 1 00:26:28.012 } 00:26:28.012 18:24:58 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:28.012 18:24:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 61154 00:26:28.012 18:24:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 61154 ']' 00:26:28.012 18:24:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 61154 00:26:28.012 18:24:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:26:28.012 18:24:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:28.012 18:24:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61154 00:26:28.012 18:24:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:26:28.012 18:24:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:26:28.012 killing process with pid 61154 00:26:28.012 18:24:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61154' 00:26:28.012 18:24:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 61154 00:26:28.012 [2024-12-06 18:24:58.929462] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:26:28.012 18:24:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 61154 00:26:28.270 [2024-12-06 18:24:59.067806] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:26:29.684 18:25:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.fpriTrXOXX 00:26:29.684 18:25:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:26:29.684 18:25:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:26:29.684 18:25:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.74 00:26:29.684 18:25:00 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:26:29.684 18:25:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:26:29.684 18:25:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:26:29.684 18:25:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.74 != \0\.\0\0 ]] 00:26:29.684 00:26:29.684 real 0m4.395s 00:26:29.684 user 0m5.229s 00:26:29.684 sys 0m0.583s 00:26:29.684 18:25:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:29.684 ************************************ 00:26:29.684 END TEST raid_read_error_test 00:26:29.684 ************************************ 00:26:29.684 18:25:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:26:29.684 18:25:00 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid0 2 write 00:26:29.684 18:25:00 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:26:29.684 18:25:00 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:29.684 18:25:00 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:26:29.684 ************************************ 00:26:29.684 START TEST raid_write_error_test 00:26:29.684 ************************************ 00:26:29.684 18:25:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 2 write 00:26:29.684 18:25:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:26:29.684 18:25:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:26:29.684 18:25:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:26:29.684 18:25:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:26:29.684 18:25:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:26:29.684 18:25:00 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:26:29.684 18:25:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:26:29.684 18:25:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:26:29.684 18:25:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:26:29.684 18:25:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:26:29.684 18:25:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:26:29.684 18:25:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:26:29.684 18:25:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:26:29.684 18:25:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:26:29.684 18:25:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:26:29.684 18:25:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:26:29.684 18:25:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:26:29.684 18:25:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:26:29.684 18:25:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:26:29.684 18:25:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:26:29.684 18:25:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:26:29.684 18:25:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:26:29.684 18:25:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.hz2vx5omvL 00:26:29.684 18:25:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=61294 00:26:29.684 18:25:00 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 61294 00:26:29.684 18:25:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:26:29.684 18:25:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 61294 ']' 00:26:29.684 18:25:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:29.684 18:25:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:29.684 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:29.684 18:25:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:29.684 18:25:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:29.684 18:25:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:26:29.684 [2024-12-06 18:25:00.492925] Starting SPDK v25.01-pre git sha1 b6a18b192 / DPDK 24.03.0 initialization... 
00:26:29.684 [2024-12-06 18:25:00.493050] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61294 ] 00:26:29.941 [2024-12-06 18:25:00.672873] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:29.941 [2024-12-06 18:25:00.791842] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:30.200 [2024-12-06 18:25:01.005313] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:26:30.200 [2024-12-06 18:25:01.005372] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:26:30.458 18:25:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:30.458 18:25:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:26:30.459 18:25:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:26:30.459 18:25:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:26:30.459 18:25:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:30.459 18:25:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:26:30.459 BaseBdev1_malloc 00:26:30.459 18:25:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:30.459 18:25:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:26:30.459 18:25:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:30.459 18:25:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:26:30.459 true 00:26:30.459 18:25:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:26:30.459 18:25:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:26:30.459 18:25:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:30.459 18:25:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:26:30.459 [2024-12-06 18:25:01.391322] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:26:30.459 [2024-12-06 18:25:01.391386] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:30.459 [2024-12-06 18:25:01.391411] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:26:30.459 [2024-12-06 18:25:01.391426] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:30.459 [2024-12-06 18:25:01.393937] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:30.459 [2024-12-06 18:25:01.393988] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:26:30.459 BaseBdev1 00:26:30.459 18:25:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:30.459 18:25:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:26:30.459 18:25:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:26:30.459 18:25:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:30.459 18:25:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:26:30.718 BaseBdev2_malloc 00:26:30.718 18:25:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:30.718 18:25:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:26:30.718 18:25:01 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:30.718 18:25:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:26:30.718 true 00:26:30.718 18:25:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:30.718 18:25:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:26:30.718 18:25:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:30.718 18:25:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:26:30.718 [2024-12-06 18:25:01.460385] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:26:30.718 [2024-12-06 18:25:01.460448] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:30.718 [2024-12-06 18:25:01.460467] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:26:30.718 [2024-12-06 18:25:01.460481] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:30.718 [2024-12-06 18:25:01.462837] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:30.718 [2024-12-06 18:25:01.462883] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:26:30.718 BaseBdev2 00:26:30.718 18:25:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:30.718 18:25:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:26:30.718 18:25:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:30.718 18:25:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:26:30.718 [2024-12-06 18:25:01.472451] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev1 is claimed 00:26:30.718 [2024-12-06 18:25:01.474609] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:26:30.718 [2024-12-06 18:25:01.474799] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:26:30.718 [2024-12-06 18:25:01.474820] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:26:30.718 [2024-12-06 18:25:01.475069] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:26:30.718 [2024-12-06 18:25:01.475252] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:26:30.718 [2024-12-06 18:25:01.475267] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:26:30.718 [2024-12-06 18:25:01.475426] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:30.718 18:25:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:30.718 18:25:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:26:30.718 18:25:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:26:30.718 18:25:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:26:30.718 18:25:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:26:30.718 18:25:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:26:30.718 18:25:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:26:30.718 18:25:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:30.718 18:25:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:30.718 18:25:01 bdev_raid.raid_write_error_test 
-- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:30.718 18:25:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:30.718 18:25:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:30.718 18:25:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:30.718 18:25:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:30.718 18:25:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:26:30.718 18:25:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:30.718 18:25:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:30.718 "name": "raid_bdev1", 00:26:30.718 "uuid": "a07e41ba-4885-4a10-b47b-9e16534bafc6", 00:26:30.718 "strip_size_kb": 64, 00:26:30.718 "state": "online", 00:26:30.718 "raid_level": "raid0", 00:26:30.718 "superblock": true, 00:26:30.718 "num_base_bdevs": 2, 00:26:30.718 "num_base_bdevs_discovered": 2, 00:26:30.718 "num_base_bdevs_operational": 2, 00:26:30.718 "base_bdevs_list": [ 00:26:30.718 { 00:26:30.718 "name": "BaseBdev1", 00:26:30.718 "uuid": "9e28ac67-8b16-5aaf-ae5f-fa460ba6154e", 00:26:30.718 "is_configured": true, 00:26:30.718 "data_offset": 2048, 00:26:30.718 "data_size": 63488 00:26:30.718 }, 00:26:30.718 { 00:26:30.718 "name": "BaseBdev2", 00:26:30.718 "uuid": "e1b1cd9c-8da5-5a94-bd5f-25d97301ee21", 00:26:30.718 "is_configured": true, 00:26:30.718 "data_offset": 2048, 00:26:30.718 "data_size": 63488 00:26:30.718 } 00:26:30.718 ] 00:26:30.718 }' 00:26:30.718 18:25:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:30.718 18:25:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:26:30.977 18:25:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:26:30.977 18:25:01 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:26:31.236 [2024-12-06 18:25:02.017118] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:26:32.175 18:25:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:26:32.175 18:25:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:32.175 18:25:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:26:32.175 18:25:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:32.175 18:25:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:26:32.175 18:25:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:26:32.176 18:25:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:26:32.176 18:25:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:26:32.176 18:25:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:26:32.176 18:25:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:26:32.176 18:25:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:26:32.176 18:25:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:26:32.176 18:25:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:26:32.176 18:25:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:32.176 18:25:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:32.176 18:25:02 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:32.176 18:25:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:32.176 18:25:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:32.176 18:25:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:32.176 18:25:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:32.176 18:25:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:26:32.176 18:25:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:32.176 18:25:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:32.176 "name": "raid_bdev1", 00:26:32.176 "uuid": "a07e41ba-4885-4a10-b47b-9e16534bafc6", 00:26:32.176 "strip_size_kb": 64, 00:26:32.176 "state": "online", 00:26:32.176 "raid_level": "raid0", 00:26:32.176 "superblock": true, 00:26:32.176 "num_base_bdevs": 2, 00:26:32.176 "num_base_bdevs_discovered": 2, 00:26:32.176 "num_base_bdevs_operational": 2, 00:26:32.176 "base_bdevs_list": [ 00:26:32.176 { 00:26:32.176 "name": "BaseBdev1", 00:26:32.176 "uuid": "9e28ac67-8b16-5aaf-ae5f-fa460ba6154e", 00:26:32.176 "is_configured": true, 00:26:32.176 "data_offset": 2048, 00:26:32.176 "data_size": 63488 00:26:32.176 }, 00:26:32.176 { 00:26:32.176 "name": "BaseBdev2", 00:26:32.176 "uuid": "e1b1cd9c-8da5-5a94-bd5f-25d97301ee21", 00:26:32.176 "is_configured": true, 00:26:32.176 "data_offset": 2048, 00:26:32.176 "data_size": 63488 00:26:32.176 } 00:26:32.176 ] 00:26:32.176 }' 00:26:32.176 18:25:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:32.176 18:25:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:26:32.435 18:25:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- 
# rpc_cmd bdev_raid_delete raid_bdev1 00:26:32.435 18:25:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:32.435 18:25:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:26:32.435 [2024-12-06 18:25:03.352280] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:26:32.435 [2024-12-06 18:25:03.352324] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:26:32.435 [2024-12-06 18:25:03.355246] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:26:32.435 [2024-12-06 18:25:03.355298] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:32.435 [2024-12-06 18:25:03.355332] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:26:32.435 [2024-12-06 18:25:03.355347] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:26:32.435 { 00:26:32.435 "results": [ 00:26:32.435 { 00:26:32.435 "job": "raid_bdev1", 00:26:32.435 "core_mask": "0x1", 00:26:32.435 "workload": "randrw", 00:26:32.435 "percentage": 50, 00:26:32.435 "status": "finished", 00:26:32.435 "queue_depth": 1, 00:26:32.435 "io_size": 131072, 00:26:32.435 "runtime": 1.335222, 00:26:32.435 "iops": 15270.868814324509, 00:26:32.435 "mibps": 1908.8586017905636, 00:26:32.435 "io_failed": 1, 00:26:32.435 "io_timeout": 0, 00:26:32.435 "avg_latency_us": 90.19015752086862, 00:26:32.435 "min_latency_us": 27.553413654618474, 00:26:32.435 "max_latency_us": 1487.0618473895581 00:26:32.435 } 00:26:32.435 ], 00:26:32.435 "core_count": 1 00:26:32.435 } 00:26:32.435 18:25:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:32.435 18:25:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 61294 00:26:32.435 18:25:03 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@954 -- # '[' -z 61294 ']' 00:26:32.435 18:25:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 61294 00:26:32.435 18:25:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:26:32.435 18:25:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:32.435 18:25:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61294 00:26:32.694 18:25:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:26:32.694 18:25:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:26:32.694 18:25:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61294' 00:26:32.694 killing process with pid 61294 00:26:32.694 18:25:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 61294 00:26:32.694 [2024-12-06 18:25:03.404779] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:26:32.694 18:25:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 61294 00:26:32.694 [2024-12-06 18:25:03.548026] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:26:34.071 18:25:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.hz2vx5omvL 00:26:34.071 18:25:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:26:34.071 18:25:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:26:34.071 18:25:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.75 00:26:34.071 18:25:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:26:34.071 18:25:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:26:34.071 18:25:04 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@200 -- # return 1 00:26:34.071 18:25:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.75 != \0\.\0\0 ]] 00:26:34.071 00:26:34.071 real 0m4.435s 00:26:34.071 user 0m5.217s 00:26:34.071 sys 0m0.627s 00:26:34.071 18:25:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:34.071 18:25:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:26:34.071 ************************************ 00:26:34.071 END TEST raid_write_error_test 00:26:34.071 ************************************ 00:26:34.071 18:25:04 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:26:34.071 18:25:04 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test concat 2 false 00:26:34.071 18:25:04 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:26:34.071 18:25:04 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:34.071 18:25:04 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:26:34.071 ************************************ 00:26:34.071 START TEST raid_state_function_test 00:26:34.071 ************************************ 00:26:34.071 18:25:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 2 false 00:26:34.071 18:25:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:26:34.071 18:25:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:26:34.071 18:25:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:26:34.071 18:25:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:26:34.071 18:25:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:26:34.071 18:25:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 
00:26:34.071 18:25:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:26:34.071 18:25:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:26:34.071 18:25:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:26:34.071 18:25:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:26:34.071 18:25:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:26:34.071 18:25:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:26:34.071 18:25:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:26:34.071 18:25:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:26:34.071 18:25:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:26:34.071 18:25:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:26:34.071 18:25:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:26:34.071 18:25:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:26:34.071 18:25:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:26:34.071 18:25:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:26:34.071 18:25:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:26:34.071 18:25:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:26:34.071 18:25:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:26:34.071 18:25:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=61438 00:26:34.071 18:25:04 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:26:34.071 Process raid pid: 61438 00:26:34.071 18:25:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 61438' 00:26:34.071 18:25:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 61438 00:26:34.071 18:25:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 61438 ']' 00:26:34.071 18:25:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:34.071 18:25:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:34.071 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:34.071 18:25:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:34.072 18:25:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:34.072 18:25:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:34.072 [2024-12-06 18:25:05.000781] Starting SPDK v25.01-pre git sha1 b6a18b192 / DPDK 24.03.0 initialization... 
00:26:34.072 [2024-12-06 18:25:05.000924] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:34.330 [2024-12-06 18:25:05.188393] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:34.588 [2024-12-06 18:25:05.307798] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:34.588 [2024-12-06 18:25:05.521946] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:26:34.588 [2024-12-06 18:25:05.521995] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:26:35.154 18:25:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:35.154 18:25:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:26:35.154 18:25:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:26:35.154 18:25:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:35.154 18:25:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:35.154 [2024-12-06 18:25:05.857737] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:26:35.154 [2024-12-06 18:25:05.857808] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:26:35.154 [2024-12-06 18:25:05.857821] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:26:35.154 [2024-12-06 18:25:05.857837] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:26:35.154 18:25:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:35.154 18:25:05 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:26:35.154 18:25:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:26:35.154 18:25:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:26:35.154 18:25:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:26:35.154 18:25:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:26:35.154 18:25:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:26:35.154 18:25:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:35.154 18:25:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:35.154 18:25:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:35.154 18:25:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:35.154 18:25:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:35.154 18:25:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:35.154 18:25:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:35.154 18:25:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:35.154 18:25:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:35.154 18:25:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:35.154 "name": "Existed_Raid", 00:26:35.154 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:35.154 "strip_size_kb": 64, 00:26:35.154 "state": "configuring", 00:26:35.154 
"raid_level": "concat", 00:26:35.154 "superblock": false, 00:26:35.154 "num_base_bdevs": 2, 00:26:35.154 "num_base_bdevs_discovered": 0, 00:26:35.154 "num_base_bdevs_operational": 2, 00:26:35.154 "base_bdevs_list": [ 00:26:35.154 { 00:26:35.154 "name": "BaseBdev1", 00:26:35.154 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:35.154 "is_configured": false, 00:26:35.154 "data_offset": 0, 00:26:35.154 "data_size": 0 00:26:35.154 }, 00:26:35.154 { 00:26:35.154 "name": "BaseBdev2", 00:26:35.154 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:35.154 "is_configured": false, 00:26:35.154 "data_offset": 0, 00:26:35.154 "data_size": 0 00:26:35.154 } 00:26:35.154 ] 00:26:35.154 }' 00:26:35.154 18:25:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:35.154 18:25:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:35.413 18:25:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:26:35.413 18:25:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:35.413 18:25:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:35.413 [2024-12-06 18:25:06.297516] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:26:35.413 [2024-12-06 18:25:06.297560] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:26:35.413 18:25:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:35.413 18:25:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:26:35.413 18:25:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:35.413 18:25:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set 
+x 00:26:35.413 [2024-12-06 18:25:06.309511] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:26:35.413 [2024-12-06 18:25:06.309565] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:26:35.413 [2024-12-06 18:25:06.309589] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:26:35.413 [2024-12-06 18:25:06.309605] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:26:35.413 18:25:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:35.413 18:25:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:26:35.413 18:25:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:35.413 18:25:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:35.413 [2024-12-06 18:25:06.359813] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:26:35.413 BaseBdev1 00:26:35.673 18:25:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:35.673 18:25:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:26:35.673 18:25:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:26:35.673 18:25:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:26:35.673 18:25:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:26:35.673 18:25:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:26:35.673 18:25:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:26:35.673 18:25:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # 
rpc_cmd bdev_wait_for_examine 00:26:35.673 18:25:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:35.673 18:25:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:35.673 18:25:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:35.673 18:25:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:26:35.673 18:25:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:35.673 18:25:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:35.673 [ 00:26:35.673 { 00:26:35.673 "name": "BaseBdev1", 00:26:35.673 "aliases": [ 00:26:35.673 "bc82f56e-e8ab-47c1-8cc4-14056d232b03" 00:26:35.673 ], 00:26:35.673 "product_name": "Malloc disk", 00:26:35.673 "block_size": 512, 00:26:35.673 "num_blocks": 65536, 00:26:35.673 "uuid": "bc82f56e-e8ab-47c1-8cc4-14056d232b03", 00:26:35.673 "assigned_rate_limits": { 00:26:35.673 "rw_ios_per_sec": 0, 00:26:35.673 "rw_mbytes_per_sec": 0, 00:26:35.673 "r_mbytes_per_sec": 0, 00:26:35.673 "w_mbytes_per_sec": 0 00:26:35.673 }, 00:26:35.673 "claimed": true, 00:26:35.673 "claim_type": "exclusive_write", 00:26:35.673 "zoned": false, 00:26:35.673 "supported_io_types": { 00:26:35.673 "read": true, 00:26:35.673 "write": true, 00:26:35.673 "unmap": true, 00:26:35.673 "flush": true, 00:26:35.673 "reset": true, 00:26:35.673 "nvme_admin": false, 00:26:35.673 "nvme_io": false, 00:26:35.673 "nvme_io_md": false, 00:26:35.673 "write_zeroes": true, 00:26:35.673 "zcopy": true, 00:26:35.673 "get_zone_info": false, 00:26:35.673 "zone_management": false, 00:26:35.673 "zone_append": false, 00:26:35.673 "compare": false, 00:26:35.673 "compare_and_write": false, 00:26:35.673 "abort": true, 00:26:35.673 "seek_hole": false, 00:26:35.673 "seek_data": false, 00:26:35.673 "copy": true, 00:26:35.673 "nvme_iov_md": 
false 00:26:35.673 }, 00:26:35.673 "memory_domains": [ 00:26:35.673 { 00:26:35.673 "dma_device_id": "system", 00:26:35.673 "dma_device_type": 1 00:26:35.673 }, 00:26:35.673 { 00:26:35.673 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:35.673 "dma_device_type": 2 00:26:35.673 } 00:26:35.673 ], 00:26:35.673 "driver_specific": {} 00:26:35.673 } 00:26:35.673 ] 00:26:35.673 18:25:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:35.673 18:25:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:26:35.673 18:25:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:26:35.673 18:25:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:26:35.673 18:25:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:26:35.673 18:25:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:26:35.673 18:25:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:26:35.673 18:25:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:26:35.673 18:25:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:35.673 18:25:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:35.673 18:25:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:35.673 18:25:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:35.673 18:25:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:35.673 18:25:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:35.673 18:25:06 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:35.673 18:25:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:35.673 18:25:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:35.673 18:25:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:35.673 "name": "Existed_Raid", 00:26:35.673 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:35.673 "strip_size_kb": 64, 00:26:35.673 "state": "configuring", 00:26:35.673 "raid_level": "concat", 00:26:35.673 "superblock": false, 00:26:35.673 "num_base_bdevs": 2, 00:26:35.673 "num_base_bdevs_discovered": 1, 00:26:35.673 "num_base_bdevs_operational": 2, 00:26:35.673 "base_bdevs_list": [ 00:26:35.673 { 00:26:35.673 "name": "BaseBdev1", 00:26:35.673 "uuid": "bc82f56e-e8ab-47c1-8cc4-14056d232b03", 00:26:35.673 "is_configured": true, 00:26:35.673 "data_offset": 0, 00:26:35.673 "data_size": 65536 00:26:35.673 }, 00:26:35.673 { 00:26:35.673 "name": "BaseBdev2", 00:26:35.673 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:35.673 "is_configured": false, 00:26:35.673 "data_offset": 0, 00:26:35.673 "data_size": 0 00:26:35.673 } 00:26:35.673 ] 00:26:35.673 }' 00:26:35.674 18:25:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:35.674 18:25:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:35.934 18:25:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:26:35.934 18:25:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:35.934 18:25:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:35.934 [2024-12-06 18:25:06.863215] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:26:35.934 [2024-12-06 18:25:06.863277] 
bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:26:35.934 18:25:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:35.934 18:25:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:26:35.934 18:25:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:35.934 18:25:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:35.934 [2024-12-06 18:25:06.875221] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:26:35.934 [2024-12-06 18:25:06.877374] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:26:35.934 [2024-12-06 18:25:06.877425] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:26:35.934 18:25:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:35.934 18:25:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:26:35.934 18:25:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:26:35.934 18:25:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:26:36.193 18:25:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:26:36.193 18:25:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:26:36.193 18:25:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:26:36.193 18:25:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:26:36.193 18:25:06 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:26:36.193 18:25:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:36.193 18:25:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:36.193 18:25:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:36.193 18:25:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:36.193 18:25:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:36.193 18:25:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:36.193 18:25:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:36.193 18:25:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:36.193 18:25:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:36.193 18:25:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:36.193 "name": "Existed_Raid", 00:26:36.193 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:36.193 "strip_size_kb": 64, 00:26:36.193 "state": "configuring", 00:26:36.193 "raid_level": "concat", 00:26:36.193 "superblock": false, 00:26:36.193 "num_base_bdevs": 2, 00:26:36.193 "num_base_bdevs_discovered": 1, 00:26:36.193 "num_base_bdevs_operational": 2, 00:26:36.193 "base_bdevs_list": [ 00:26:36.193 { 00:26:36.193 "name": "BaseBdev1", 00:26:36.193 "uuid": "bc82f56e-e8ab-47c1-8cc4-14056d232b03", 00:26:36.193 "is_configured": true, 00:26:36.193 "data_offset": 0, 00:26:36.193 "data_size": 65536 00:26:36.193 }, 00:26:36.193 { 00:26:36.193 "name": "BaseBdev2", 00:26:36.193 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:36.193 "is_configured": false, 00:26:36.193 "data_offset": 0, 00:26:36.193 "data_size": 0 
00:26:36.193 } 00:26:36.193 ] 00:26:36.193 }' 00:26:36.193 18:25:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:36.194 18:25:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:36.453 18:25:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:26:36.453 18:25:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:36.453 18:25:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:36.453 [2024-12-06 18:25:07.374912] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:26:36.453 [2024-12-06 18:25:07.374971] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:26:36.453 [2024-12-06 18:25:07.374981] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:26:36.453 [2024-12-06 18:25:07.375340] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:26:36.453 [2024-12-06 18:25:07.375507] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:26:36.454 [2024-12-06 18:25:07.375522] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:26:36.454 [2024-12-06 18:25:07.375825] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:36.454 BaseBdev2 00:26:36.454 18:25:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:36.454 18:25:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:26:36.454 18:25:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:26:36.454 18:25:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:26:36.454 18:25:07 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:26:36.454 18:25:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:26:36.454 18:25:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:26:36.454 18:25:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:26:36.454 18:25:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:36.454 18:25:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:36.454 18:25:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:36.454 18:25:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:26:36.454 18:25:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:36.454 18:25:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:36.454 [ 00:26:36.454 { 00:26:36.454 "name": "BaseBdev2", 00:26:36.454 "aliases": [ 00:26:36.713 "53832464-1f84-4785-a284-1d374b59526e" 00:26:36.713 ], 00:26:36.713 "product_name": "Malloc disk", 00:26:36.713 "block_size": 512, 00:26:36.713 "num_blocks": 65536, 00:26:36.713 "uuid": "53832464-1f84-4785-a284-1d374b59526e", 00:26:36.713 "assigned_rate_limits": { 00:26:36.713 "rw_ios_per_sec": 0, 00:26:36.713 "rw_mbytes_per_sec": 0, 00:26:36.713 "r_mbytes_per_sec": 0, 00:26:36.713 "w_mbytes_per_sec": 0 00:26:36.713 }, 00:26:36.713 "claimed": true, 00:26:36.713 "claim_type": "exclusive_write", 00:26:36.713 "zoned": false, 00:26:36.713 "supported_io_types": { 00:26:36.713 "read": true, 00:26:36.713 "write": true, 00:26:36.713 "unmap": true, 00:26:36.713 "flush": true, 00:26:36.713 "reset": true, 00:26:36.713 "nvme_admin": false, 00:26:36.713 "nvme_io": false, 00:26:36.713 "nvme_io_md": 
false, 00:26:36.713 "write_zeroes": true, 00:26:36.713 "zcopy": true, 00:26:36.713 "get_zone_info": false, 00:26:36.713 "zone_management": false, 00:26:36.713 "zone_append": false, 00:26:36.713 "compare": false, 00:26:36.713 "compare_and_write": false, 00:26:36.713 "abort": true, 00:26:36.713 "seek_hole": false, 00:26:36.713 "seek_data": false, 00:26:36.713 "copy": true, 00:26:36.713 "nvme_iov_md": false 00:26:36.713 }, 00:26:36.713 "memory_domains": [ 00:26:36.713 { 00:26:36.713 "dma_device_id": "system", 00:26:36.713 "dma_device_type": 1 00:26:36.713 }, 00:26:36.713 { 00:26:36.713 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:36.713 "dma_device_type": 2 00:26:36.713 } 00:26:36.713 ], 00:26:36.713 "driver_specific": {} 00:26:36.713 } 00:26:36.713 ] 00:26:36.713 18:25:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:36.713 18:25:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:26:36.713 18:25:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:26:36.713 18:25:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:26:36.713 18:25:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 00:26:36.713 18:25:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:26:36.713 18:25:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:26:36.713 18:25:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:26:36.713 18:25:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:26:36.713 18:25:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:26:36.713 18:25:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:26:36.713 18:25:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:36.713 18:25:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:36.713 18:25:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:36.713 18:25:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:36.713 18:25:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:36.713 18:25:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:36.713 18:25:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:36.713 18:25:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:36.713 18:25:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:36.713 "name": "Existed_Raid", 00:26:36.713 "uuid": "5c59b1a9-b6f9-493e-a2d8-de99929e0e1d", 00:26:36.713 "strip_size_kb": 64, 00:26:36.713 "state": "online", 00:26:36.713 "raid_level": "concat", 00:26:36.713 "superblock": false, 00:26:36.713 "num_base_bdevs": 2, 00:26:36.713 "num_base_bdevs_discovered": 2, 00:26:36.713 "num_base_bdevs_operational": 2, 00:26:36.713 "base_bdevs_list": [ 00:26:36.713 { 00:26:36.713 "name": "BaseBdev1", 00:26:36.713 "uuid": "bc82f56e-e8ab-47c1-8cc4-14056d232b03", 00:26:36.713 "is_configured": true, 00:26:36.713 "data_offset": 0, 00:26:36.713 "data_size": 65536 00:26:36.713 }, 00:26:36.713 { 00:26:36.713 "name": "BaseBdev2", 00:26:36.713 "uuid": "53832464-1f84-4785-a284-1d374b59526e", 00:26:36.713 "is_configured": true, 00:26:36.713 "data_offset": 0, 00:26:36.713 "data_size": 65536 00:26:36.713 } 00:26:36.713 ] 00:26:36.713 }' 00:26:36.713 18:25:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:26:36.713 18:25:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:36.972 18:25:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:26:36.972 18:25:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:26:36.972 18:25:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:26:36.972 18:25:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:26:36.972 18:25:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:26:36.972 18:25:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:26:36.972 18:25:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:26:36.972 18:25:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:36.972 18:25:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:26:36.972 18:25:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:36.972 [2024-12-06 18:25:07.830647] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:26:36.972 18:25:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:36.972 18:25:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:26:36.972 "name": "Existed_Raid", 00:26:36.972 "aliases": [ 00:26:36.972 "5c59b1a9-b6f9-493e-a2d8-de99929e0e1d" 00:26:36.972 ], 00:26:36.972 "product_name": "Raid Volume", 00:26:36.972 "block_size": 512, 00:26:36.972 "num_blocks": 131072, 00:26:36.972 "uuid": "5c59b1a9-b6f9-493e-a2d8-de99929e0e1d", 00:26:36.972 "assigned_rate_limits": { 00:26:36.972 "rw_ios_per_sec": 0, 00:26:36.972 "rw_mbytes_per_sec": 0, 00:26:36.972 "r_mbytes_per_sec": 
0, 00:26:36.972 "w_mbytes_per_sec": 0 00:26:36.972 }, 00:26:36.972 "claimed": false, 00:26:36.972 "zoned": false, 00:26:36.972 "supported_io_types": { 00:26:36.972 "read": true, 00:26:36.972 "write": true, 00:26:36.972 "unmap": true, 00:26:36.972 "flush": true, 00:26:36.972 "reset": true, 00:26:36.972 "nvme_admin": false, 00:26:36.972 "nvme_io": false, 00:26:36.972 "nvme_io_md": false, 00:26:36.972 "write_zeroes": true, 00:26:36.972 "zcopy": false, 00:26:36.972 "get_zone_info": false, 00:26:36.972 "zone_management": false, 00:26:36.972 "zone_append": false, 00:26:36.972 "compare": false, 00:26:36.972 "compare_and_write": false, 00:26:36.972 "abort": false, 00:26:36.972 "seek_hole": false, 00:26:36.972 "seek_data": false, 00:26:36.972 "copy": false, 00:26:36.972 "nvme_iov_md": false 00:26:36.972 }, 00:26:36.972 "memory_domains": [ 00:26:36.972 { 00:26:36.972 "dma_device_id": "system", 00:26:36.972 "dma_device_type": 1 00:26:36.972 }, 00:26:36.972 { 00:26:36.972 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:36.972 "dma_device_type": 2 00:26:36.972 }, 00:26:36.972 { 00:26:36.972 "dma_device_id": "system", 00:26:36.972 "dma_device_type": 1 00:26:36.972 }, 00:26:36.972 { 00:26:36.972 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:36.972 "dma_device_type": 2 00:26:36.972 } 00:26:36.972 ], 00:26:36.972 "driver_specific": { 00:26:36.972 "raid": { 00:26:36.972 "uuid": "5c59b1a9-b6f9-493e-a2d8-de99929e0e1d", 00:26:36.972 "strip_size_kb": 64, 00:26:36.972 "state": "online", 00:26:36.972 "raid_level": "concat", 00:26:36.972 "superblock": false, 00:26:36.972 "num_base_bdevs": 2, 00:26:36.972 "num_base_bdevs_discovered": 2, 00:26:36.972 "num_base_bdevs_operational": 2, 00:26:36.972 "base_bdevs_list": [ 00:26:36.972 { 00:26:36.972 "name": "BaseBdev1", 00:26:36.972 "uuid": "bc82f56e-e8ab-47c1-8cc4-14056d232b03", 00:26:36.972 "is_configured": true, 00:26:36.972 "data_offset": 0, 00:26:36.972 "data_size": 65536 00:26:36.972 }, 00:26:36.972 { 00:26:36.972 "name": "BaseBdev2", 
00:26:36.972 "uuid": "53832464-1f84-4785-a284-1d374b59526e", 00:26:36.972 "is_configured": true, 00:26:36.972 "data_offset": 0, 00:26:36.972 "data_size": 65536 00:26:36.972 } 00:26:36.972 ] 00:26:36.972 } 00:26:36.972 } 00:26:36.972 }' 00:26:36.972 18:25:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:26:36.972 18:25:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:26:36.972 BaseBdev2' 00:26:36.972 18:25:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:26:37.233 18:25:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:26:37.233 18:25:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:26:37.233 18:25:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:26:37.233 18:25:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:37.233 18:25:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:37.233 18:25:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:26:37.233 18:25:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:37.233 18:25:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:26:37.233 18:25:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:26:37.233 18:25:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:26:37.233 18:25:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 
00:26:37.233 18:25:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:26:37.233 18:25:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:37.233 18:25:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:37.233 18:25:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:37.233 18:25:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:26:37.233 18:25:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:26:37.233 18:25:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:26:37.233 18:25:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:37.233 18:25:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:37.233 [2024-12-06 18:25:08.074118] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:26:37.233 [2024-12-06 18:25:08.074175] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:26:37.233 [2024-12-06 18:25:08.074232] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:26:37.500 18:25:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:37.500 18:25:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:26:37.500 18:25:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:26:37.500 18:25:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:26:37.500 18:25:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:26:37.500 18:25:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # 
expected_state=offline 00:26:37.500 18:25:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1 00:26:37.500 18:25:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:26:37.500 18:25:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:26:37.500 18:25:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:26:37.500 18:25:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:26:37.500 18:25:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:26:37.500 18:25:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:37.500 18:25:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:37.500 18:25:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:37.500 18:25:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:37.500 18:25:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:37.500 18:25:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:37.500 18:25:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:37.500 18:25:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:37.500 18:25:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:37.500 18:25:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:37.500 "name": "Existed_Raid", 00:26:37.500 "uuid": "5c59b1a9-b6f9-493e-a2d8-de99929e0e1d", 00:26:37.500 "strip_size_kb": 64, 00:26:37.500 
"state": "offline", 00:26:37.500 "raid_level": "concat", 00:26:37.500 "superblock": false, 00:26:37.500 "num_base_bdevs": 2, 00:26:37.500 "num_base_bdevs_discovered": 1, 00:26:37.500 "num_base_bdevs_operational": 1, 00:26:37.500 "base_bdevs_list": [ 00:26:37.500 { 00:26:37.500 "name": null, 00:26:37.500 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:37.500 "is_configured": false, 00:26:37.500 "data_offset": 0, 00:26:37.500 "data_size": 65536 00:26:37.500 }, 00:26:37.500 { 00:26:37.500 "name": "BaseBdev2", 00:26:37.500 "uuid": "53832464-1f84-4785-a284-1d374b59526e", 00:26:37.500 "is_configured": true, 00:26:37.500 "data_offset": 0, 00:26:37.500 "data_size": 65536 00:26:37.500 } 00:26:37.500 ] 00:26:37.500 }' 00:26:37.500 18:25:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:37.500 18:25:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:37.759 18:25:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:26:37.759 18:25:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:26:37.759 18:25:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:37.759 18:25:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:37.759 18:25:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:37.759 18:25:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:26:37.759 18:25:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:37.759 18:25:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:26:37.759 18:25:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:26:37.759 18:25:08 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:26:37.759 18:25:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:37.759 18:25:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:37.759 [2024-12-06 18:25:08.604630] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:26:37.759 [2024-12-06 18:25:08.604882] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:26:38.018 18:25:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:38.018 18:25:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:26:38.018 18:25:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:26:38.018 18:25:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:38.018 18:25:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:38.018 18:25:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:38.018 18:25:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:26:38.018 18:25:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:38.018 18:25:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:26:38.018 18:25:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:26:38.018 18:25:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:26:38.018 18:25:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 61438 00:26:38.018 18:25:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 61438 ']' 00:26:38.018 18:25:08 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@958 -- # kill -0 61438 00:26:38.018 18:25:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:26:38.018 18:25:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:38.018 18:25:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61438 00:26:38.018 18:25:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:26:38.018 killing process with pid 61438 00:26:38.018 18:25:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:26:38.018 18:25:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61438' 00:26:38.018 18:25:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 61438 00:26:38.018 [2024-12-06 18:25:08.800567] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:26:38.018 18:25:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 61438 00:26:38.018 [2024-12-06 18:25:08.819356] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:26:39.398 18:25:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:26:39.398 00:26:39.398 real 0m5.147s 00:26:39.398 user 0m7.308s 00:26:39.398 sys 0m0.948s 00:26:39.398 18:25:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:39.398 18:25:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:39.398 ************************************ 00:26:39.398 END TEST raid_state_function_test 00:26:39.398 ************************************ 00:26:39.398 18:25:10 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test concat 2 true 00:26:39.398 18:25:10 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 
']' 00:26:39.398 18:25:10 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:39.398 18:25:10 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:26:39.398 ************************************ 00:26:39.398 START TEST raid_state_function_test_sb 00:26:39.398 ************************************ 00:26:39.398 18:25:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 2 true 00:26:39.398 18:25:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:26:39.398 18:25:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:26:39.398 18:25:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:26:39.398 18:25:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:26:39.398 18:25:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:26:39.398 18:25:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:26:39.398 18:25:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:26:39.398 18:25:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:26:39.398 18:25:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:26:39.398 18:25:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:26:39.398 18:25:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:26:39.398 18:25:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:26:39.398 18:25:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:26:39.398 18:25:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 
00:26:39.398 18:25:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:26:39.398 18:25:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:26:39.398 18:25:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:26:39.398 18:25:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:26:39.398 18:25:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:26:39.398 18:25:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:26:39.398 18:25:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:26:39.398 18:25:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:26:39.398 18:25:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:26:39.398 18:25:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=61691 00:26:39.398 18:25:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:26:39.398 Process raid pid: 61691 00:26:39.398 18:25:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 61691' 00:26:39.398 18:25:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 61691 00:26:39.398 18:25:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 61691 ']' 00:26:39.398 18:25:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:39.398 18:25:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:39.398 18:25:10 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:39.398 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:39.398 18:25:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:39.398 18:25:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:39.398 [2024-12-06 18:25:10.229587] Starting SPDK v25.01-pre git sha1 b6a18b192 / DPDK 24.03.0 initialization... 00:26:39.398 [2024-12-06 18:25:10.229731] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:39.657 [2024-12-06 18:25:10.414447] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:39.657 [2024-12-06 18:25:10.541697] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:39.917 [2024-12-06 18:25:10.764002] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:26:39.917 [2024-12-06 18:25:10.764052] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:26:40.177 18:25:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:40.177 18:25:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:26:40.177 18:25:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:26:40.177 18:25:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:40.177 18:25:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:40.177 [2024-12-06 18:25:11.104706] bdev.c:8674:bdev_open_ext: *NOTICE*: 
Currently unable to find bdev with name: BaseBdev1 00:26:40.177 [2024-12-06 18:25:11.104776] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:26:40.177 [2024-12-06 18:25:11.104790] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:26:40.177 [2024-12-06 18:25:11.104804] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:26:40.177 18:25:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:40.177 18:25:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:26:40.177 18:25:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:26:40.177 18:25:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:26:40.177 18:25:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:26:40.177 18:25:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:26:40.177 18:25:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:26:40.177 18:25:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:40.177 18:25:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:40.177 18:25:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:40.177 18:25:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:40.177 18:25:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:40.177 18:25:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:40.177 
18:25:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:40.177 18:25:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:40.437 18:25:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:40.437 18:25:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:40.437 "name": "Existed_Raid", 00:26:40.437 "uuid": "db61e13b-98d4-4308-a55d-bc24bf01134c", 00:26:40.437 "strip_size_kb": 64, 00:26:40.437 "state": "configuring", 00:26:40.437 "raid_level": "concat", 00:26:40.437 "superblock": true, 00:26:40.437 "num_base_bdevs": 2, 00:26:40.437 "num_base_bdevs_discovered": 0, 00:26:40.437 "num_base_bdevs_operational": 2, 00:26:40.437 "base_bdevs_list": [ 00:26:40.437 { 00:26:40.437 "name": "BaseBdev1", 00:26:40.437 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:40.437 "is_configured": false, 00:26:40.437 "data_offset": 0, 00:26:40.437 "data_size": 0 00:26:40.437 }, 00:26:40.437 { 00:26:40.437 "name": "BaseBdev2", 00:26:40.437 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:40.437 "is_configured": false, 00:26:40.437 "data_offset": 0, 00:26:40.437 "data_size": 0 00:26:40.437 } 00:26:40.437 ] 00:26:40.437 }' 00:26:40.437 18:25:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:40.437 18:25:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:40.696 18:25:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:26:40.696 18:25:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:40.696 18:25:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:40.696 [2024-12-06 18:25:11.556064] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 
00:26:40.696 [2024-12-06 18:25:11.556114] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:26:40.696 18:25:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:40.696 18:25:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:26:40.696 18:25:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:40.696 18:25:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:40.696 [2024-12-06 18:25:11.568071] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:26:40.696 [2024-12-06 18:25:11.568137] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:26:40.696 [2024-12-06 18:25:11.568164] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:26:40.696 [2024-12-06 18:25:11.568182] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:26:40.696 18:25:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:40.696 18:25:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:26:40.696 18:25:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:40.696 18:25:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:40.696 [2024-12-06 18:25:11.619789] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:26:40.696 BaseBdev1 00:26:40.696 18:25:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:40.696 18:25:11 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:26:40.696 18:25:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:26:40.696 18:25:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:26:40.696 18:25:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:26:40.696 18:25:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:26:40.696 18:25:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:26:40.696 18:25:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:26:40.696 18:25:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:40.696 18:25:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:40.696 18:25:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:40.696 18:25:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:26:40.696 18:25:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:40.696 18:25:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:40.954 [ 00:26:40.954 { 00:26:40.954 "name": "BaseBdev1", 00:26:40.954 "aliases": [ 00:26:40.954 "d5ff378b-65bf-4241-b3b4-6a95ca4fadad" 00:26:40.954 ], 00:26:40.954 "product_name": "Malloc disk", 00:26:40.954 "block_size": 512, 00:26:40.954 "num_blocks": 65536, 00:26:40.954 "uuid": "d5ff378b-65bf-4241-b3b4-6a95ca4fadad", 00:26:40.954 "assigned_rate_limits": { 00:26:40.954 "rw_ios_per_sec": 0, 00:26:40.954 "rw_mbytes_per_sec": 0, 00:26:40.954 "r_mbytes_per_sec": 0, 00:26:40.954 "w_mbytes_per_sec": 0 00:26:40.954 }, 00:26:40.954 "claimed": true, 
00:26:40.954 "claim_type": "exclusive_write", 00:26:40.954 "zoned": false, 00:26:40.954 "supported_io_types": { 00:26:40.954 "read": true, 00:26:40.954 "write": true, 00:26:40.954 "unmap": true, 00:26:40.954 "flush": true, 00:26:40.954 "reset": true, 00:26:40.954 "nvme_admin": false, 00:26:40.954 "nvme_io": false, 00:26:40.954 "nvme_io_md": false, 00:26:40.954 "write_zeroes": true, 00:26:40.954 "zcopy": true, 00:26:40.954 "get_zone_info": false, 00:26:40.954 "zone_management": false, 00:26:40.954 "zone_append": false, 00:26:40.954 "compare": false, 00:26:40.954 "compare_and_write": false, 00:26:40.954 "abort": true, 00:26:40.954 "seek_hole": false, 00:26:40.954 "seek_data": false, 00:26:40.954 "copy": true, 00:26:40.954 "nvme_iov_md": false 00:26:40.954 }, 00:26:40.954 "memory_domains": [ 00:26:40.954 { 00:26:40.954 "dma_device_id": "system", 00:26:40.954 "dma_device_type": 1 00:26:40.954 }, 00:26:40.954 { 00:26:40.954 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:40.954 "dma_device_type": 2 00:26:40.954 } 00:26:40.954 ], 00:26:40.954 "driver_specific": {} 00:26:40.955 } 00:26:40.955 ] 00:26:40.955 18:25:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:40.955 18:25:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:26:40.955 18:25:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:26:40.955 18:25:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:26:40.955 18:25:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:26:40.955 18:25:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:26:40.955 18:25:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:26:40.955 18:25:11 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:26:40.955 18:25:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:40.955 18:25:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:40.955 18:25:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:40.955 18:25:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:40.955 18:25:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:40.955 18:25:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:40.955 18:25:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:40.955 18:25:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:40.955 18:25:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:40.955 18:25:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:40.955 "name": "Existed_Raid", 00:26:40.955 "uuid": "d4d198c0-bea7-40d6-8c5b-eb08a3bc9db9", 00:26:40.955 "strip_size_kb": 64, 00:26:40.955 "state": "configuring", 00:26:40.955 "raid_level": "concat", 00:26:40.955 "superblock": true, 00:26:40.955 "num_base_bdevs": 2, 00:26:40.955 "num_base_bdevs_discovered": 1, 00:26:40.955 "num_base_bdevs_operational": 2, 00:26:40.955 "base_bdevs_list": [ 00:26:40.955 { 00:26:40.955 "name": "BaseBdev1", 00:26:40.955 "uuid": "d5ff378b-65bf-4241-b3b4-6a95ca4fadad", 00:26:40.955 "is_configured": true, 00:26:40.955 "data_offset": 2048, 00:26:40.955 "data_size": 63488 00:26:40.955 }, 00:26:40.955 { 00:26:40.955 "name": "BaseBdev2", 00:26:40.955 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:40.955 
"is_configured": false, 00:26:40.955 "data_offset": 0, 00:26:40.955 "data_size": 0 00:26:40.955 } 00:26:40.955 ] 00:26:40.955 }' 00:26:40.955 18:25:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:40.955 18:25:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:41.213 18:25:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:26:41.213 18:25:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:41.213 18:25:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:41.213 [2024-12-06 18:25:12.079272] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:26:41.213 [2024-12-06 18:25:12.079339] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:26:41.213 18:25:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:41.213 18:25:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:26:41.213 18:25:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:41.213 18:25:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:41.213 [2024-12-06 18:25:12.091347] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:26:41.213 [2024-12-06 18:25:12.093642] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:26:41.213 [2024-12-06 18:25:12.093699] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:26:41.213 18:25:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:41.213 18:25:12 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:26:41.213 18:25:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:26:41.213 18:25:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:26:41.213 18:25:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:26:41.213 18:25:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:26:41.213 18:25:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:26:41.213 18:25:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:26:41.213 18:25:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:26:41.213 18:25:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:41.213 18:25:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:41.213 18:25:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:41.213 18:25:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:41.213 18:25:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:41.213 18:25:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:41.213 18:25:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:41.213 18:25:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:41.213 18:25:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:41.213 18:25:12 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:41.213 "name": "Existed_Raid", 00:26:41.213 "uuid": "ae6c4ef2-ca73-4c3e-acf4-08cb724313dd", 00:26:41.213 "strip_size_kb": 64, 00:26:41.213 "state": "configuring", 00:26:41.213 "raid_level": "concat", 00:26:41.213 "superblock": true, 00:26:41.213 "num_base_bdevs": 2, 00:26:41.213 "num_base_bdevs_discovered": 1, 00:26:41.213 "num_base_bdevs_operational": 2, 00:26:41.213 "base_bdevs_list": [ 00:26:41.213 { 00:26:41.213 "name": "BaseBdev1", 00:26:41.213 "uuid": "d5ff378b-65bf-4241-b3b4-6a95ca4fadad", 00:26:41.213 "is_configured": true, 00:26:41.213 "data_offset": 2048, 00:26:41.213 "data_size": 63488 00:26:41.213 }, 00:26:41.213 { 00:26:41.213 "name": "BaseBdev2", 00:26:41.213 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:41.213 "is_configured": false, 00:26:41.213 "data_offset": 0, 00:26:41.213 "data_size": 0 00:26:41.213 } 00:26:41.213 ] 00:26:41.213 }' 00:26:41.213 18:25:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:41.213 18:25:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:41.781 18:25:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:26:41.781 18:25:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:41.781 18:25:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:41.781 [2024-12-06 18:25:12.584747] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:26:41.781 BaseBdev2 00:26:41.781 [2024-12-06 18:25:12.585027] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:26:41.781 [2024-12-06 18:25:12.585044] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:26:41.781 [2024-12-06 18:25:12.585365] bdev_raid.c: 
265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:26:41.781 [2024-12-06 18:25:12.585565] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:26:41.781 [2024-12-06 18:25:12.585585] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:26:41.781 [2024-12-06 18:25:12.585750] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:41.781 18:25:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:41.781 18:25:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:26:41.781 18:25:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:26:41.781 18:25:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:26:41.781 18:25:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:26:41.781 18:25:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:26:41.781 18:25:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:26:41.781 18:25:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:26:41.781 18:25:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:41.781 18:25:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:41.781 18:25:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:41.781 18:25:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:26:41.781 18:25:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:41.781 
18:25:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:41.781 [ 00:26:41.781 { 00:26:41.781 "name": "BaseBdev2", 00:26:41.781 "aliases": [ 00:26:41.781 "e1ae83ce-4178-4804-ad1c-2aedfb4ff0c6" 00:26:41.781 ], 00:26:41.781 "product_name": "Malloc disk", 00:26:41.781 "block_size": 512, 00:26:41.781 "num_blocks": 65536, 00:26:41.781 "uuid": "e1ae83ce-4178-4804-ad1c-2aedfb4ff0c6", 00:26:41.781 "assigned_rate_limits": { 00:26:41.781 "rw_ios_per_sec": 0, 00:26:41.781 "rw_mbytes_per_sec": 0, 00:26:41.781 "r_mbytes_per_sec": 0, 00:26:41.781 "w_mbytes_per_sec": 0 00:26:41.781 }, 00:26:41.781 "claimed": true, 00:26:41.781 "claim_type": "exclusive_write", 00:26:41.781 "zoned": false, 00:26:41.781 "supported_io_types": { 00:26:41.781 "read": true, 00:26:41.781 "write": true, 00:26:41.781 "unmap": true, 00:26:41.781 "flush": true, 00:26:41.781 "reset": true, 00:26:41.781 "nvme_admin": false, 00:26:41.781 "nvme_io": false, 00:26:41.781 "nvme_io_md": false, 00:26:41.781 "write_zeroes": true, 00:26:41.781 "zcopy": true, 00:26:41.781 "get_zone_info": false, 00:26:41.781 "zone_management": false, 00:26:41.781 "zone_append": false, 00:26:41.781 "compare": false, 00:26:41.781 "compare_and_write": false, 00:26:41.781 "abort": true, 00:26:41.781 "seek_hole": false, 00:26:41.781 "seek_data": false, 00:26:41.781 "copy": true, 00:26:41.781 "nvme_iov_md": false 00:26:41.781 }, 00:26:41.781 "memory_domains": [ 00:26:41.781 { 00:26:41.781 "dma_device_id": "system", 00:26:41.781 "dma_device_type": 1 00:26:41.781 }, 00:26:41.781 { 00:26:41.781 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:41.781 "dma_device_type": 2 00:26:41.781 } 00:26:41.781 ], 00:26:41.781 "driver_specific": {} 00:26:41.781 } 00:26:41.781 ] 00:26:41.781 18:25:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:41.781 18:25:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:26:41.781 18:25:12 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:26:41.781 18:25:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:26:41.781 18:25:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 00:26:41.781 18:25:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:26:41.781 18:25:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:26:41.781 18:25:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:26:41.781 18:25:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:26:41.781 18:25:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:26:41.781 18:25:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:41.781 18:25:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:41.781 18:25:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:41.781 18:25:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:41.781 18:25:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:41.781 18:25:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:41.781 18:25:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:41.781 18:25:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:41.781 18:25:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:41.781 18:25:12 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:41.781 "name": "Existed_Raid", 00:26:41.781 "uuid": "ae6c4ef2-ca73-4c3e-acf4-08cb724313dd", 00:26:41.781 "strip_size_kb": 64, 00:26:41.781 "state": "online", 00:26:41.781 "raid_level": "concat", 00:26:41.781 "superblock": true, 00:26:41.781 "num_base_bdevs": 2, 00:26:41.781 "num_base_bdevs_discovered": 2, 00:26:41.781 "num_base_bdevs_operational": 2, 00:26:41.781 "base_bdevs_list": [ 00:26:41.781 { 00:26:41.781 "name": "BaseBdev1", 00:26:41.781 "uuid": "d5ff378b-65bf-4241-b3b4-6a95ca4fadad", 00:26:41.781 "is_configured": true, 00:26:41.781 "data_offset": 2048, 00:26:41.781 "data_size": 63488 00:26:41.781 }, 00:26:41.781 { 00:26:41.781 "name": "BaseBdev2", 00:26:41.781 "uuid": "e1ae83ce-4178-4804-ad1c-2aedfb4ff0c6", 00:26:41.781 "is_configured": true, 00:26:41.781 "data_offset": 2048, 00:26:41.781 "data_size": 63488 00:26:41.781 } 00:26:41.781 ] 00:26:41.781 }' 00:26:41.781 18:25:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:41.781 18:25:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:42.350 18:25:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:26:42.350 18:25:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:26:42.350 18:25:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:26:42.350 18:25:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:26:42.350 18:25:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:26:42.350 18:25:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:26:42.350 18:25:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b 
Existed_Raid 00:26:42.350 18:25:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:26:42.350 18:25:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:42.350 18:25:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:42.350 [2024-12-06 18:25:13.068455] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:26:42.350 18:25:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:42.350 18:25:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:26:42.350 "name": "Existed_Raid", 00:26:42.350 "aliases": [ 00:26:42.350 "ae6c4ef2-ca73-4c3e-acf4-08cb724313dd" 00:26:42.350 ], 00:26:42.350 "product_name": "Raid Volume", 00:26:42.350 "block_size": 512, 00:26:42.350 "num_blocks": 126976, 00:26:42.350 "uuid": "ae6c4ef2-ca73-4c3e-acf4-08cb724313dd", 00:26:42.350 "assigned_rate_limits": { 00:26:42.350 "rw_ios_per_sec": 0, 00:26:42.350 "rw_mbytes_per_sec": 0, 00:26:42.350 "r_mbytes_per_sec": 0, 00:26:42.350 "w_mbytes_per_sec": 0 00:26:42.350 }, 00:26:42.350 "claimed": false, 00:26:42.350 "zoned": false, 00:26:42.350 "supported_io_types": { 00:26:42.350 "read": true, 00:26:42.350 "write": true, 00:26:42.350 "unmap": true, 00:26:42.350 "flush": true, 00:26:42.350 "reset": true, 00:26:42.350 "nvme_admin": false, 00:26:42.350 "nvme_io": false, 00:26:42.350 "nvme_io_md": false, 00:26:42.350 "write_zeroes": true, 00:26:42.350 "zcopy": false, 00:26:42.350 "get_zone_info": false, 00:26:42.350 "zone_management": false, 00:26:42.350 "zone_append": false, 00:26:42.350 "compare": false, 00:26:42.350 "compare_and_write": false, 00:26:42.350 "abort": false, 00:26:42.350 "seek_hole": false, 00:26:42.350 "seek_data": false, 00:26:42.350 "copy": false, 00:26:42.350 "nvme_iov_md": false 00:26:42.350 }, 00:26:42.350 "memory_domains": [ 00:26:42.350 { 00:26:42.350 
"dma_device_id": "system", 00:26:42.350 "dma_device_type": 1 00:26:42.350 }, 00:26:42.350 { 00:26:42.350 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:42.350 "dma_device_type": 2 00:26:42.350 }, 00:26:42.350 { 00:26:42.350 "dma_device_id": "system", 00:26:42.350 "dma_device_type": 1 00:26:42.350 }, 00:26:42.350 { 00:26:42.350 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:42.350 "dma_device_type": 2 00:26:42.350 } 00:26:42.350 ], 00:26:42.350 "driver_specific": { 00:26:42.350 "raid": { 00:26:42.350 "uuid": "ae6c4ef2-ca73-4c3e-acf4-08cb724313dd", 00:26:42.350 "strip_size_kb": 64, 00:26:42.350 "state": "online", 00:26:42.350 "raid_level": "concat", 00:26:42.350 "superblock": true, 00:26:42.350 "num_base_bdevs": 2, 00:26:42.350 "num_base_bdevs_discovered": 2, 00:26:42.350 "num_base_bdevs_operational": 2, 00:26:42.350 "base_bdevs_list": [ 00:26:42.350 { 00:26:42.350 "name": "BaseBdev1", 00:26:42.350 "uuid": "d5ff378b-65bf-4241-b3b4-6a95ca4fadad", 00:26:42.350 "is_configured": true, 00:26:42.350 "data_offset": 2048, 00:26:42.350 "data_size": 63488 00:26:42.350 }, 00:26:42.350 { 00:26:42.350 "name": "BaseBdev2", 00:26:42.350 "uuid": "e1ae83ce-4178-4804-ad1c-2aedfb4ff0c6", 00:26:42.350 "is_configured": true, 00:26:42.350 "data_offset": 2048, 00:26:42.350 "data_size": 63488 00:26:42.350 } 00:26:42.350 ] 00:26:42.350 } 00:26:42.350 } 00:26:42.350 }' 00:26:42.350 18:25:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:26:42.350 18:25:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:26:42.350 BaseBdev2' 00:26:42.350 18:25:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:26:42.350 18:25:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:26:42.350 18:25:13 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:26:42.350 18:25:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:26:42.350 18:25:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:42.350 18:25:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:26:42.350 18:25:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:42.350 18:25:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:42.350 18:25:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:26:42.350 18:25:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:26:42.350 18:25:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:26:42.350 18:25:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:26:42.350 18:25:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:42.350 18:25:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:42.350 18:25:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:26:42.350 18:25:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:42.350 18:25:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:26:42.350 18:25:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:26:42.350 18:25:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # 
rpc_cmd bdev_malloc_delete BaseBdev1 00:26:42.350 18:25:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:42.350 18:25:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:42.350 [2024-12-06 18:25:13.287950] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:26:42.350 [2024-12-06 18:25:13.287997] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:26:42.350 [2024-12-06 18:25:13.288068] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:26:42.610 18:25:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:42.610 18:25:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:26:42.610 18:25:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:26:42.610 18:25:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:26:42.610 18:25:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:26:42.610 18:25:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:26:42.610 18:25:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1 00:26:42.610 18:25:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:26:42.610 18:25:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:26:42.610 18:25:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:26:42.610 18:25:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:26:42.610 18:25:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 
00:26:42.610 18:25:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:42.610 18:25:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:42.610 18:25:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:42.610 18:25:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:42.610 18:25:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:42.610 18:25:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:42.610 18:25:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:42.610 18:25:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:42.610 18:25:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:42.610 18:25:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:42.610 "name": "Existed_Raid", 00:26:42.610 "uuid": "ae6c4ef2-ca73-4c3e-acf4-08cb724313dd", 00:26:42.610 "strip_size_kb": 64, 00:26:42.610 "state": "offline", 00:26:42.610 "raid_level": "concat", 00:26:42.610 "superblock": true, 00:26:42.610 "num_base_bdevs": 2, 00:26:42.610 "num_base_bdevs_discovered": 1, 00:26:42.610 "num_base_bdevs_operational": 1, 00:26:42.610 "base_bdevs_list": [ 00:26:42.610 { 00:26:42.610 "name": null, 00:26:42.610 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:42.610 "is_configured": false, 00:26:42.610 "data_offset": 0, 00:26:42.610 "data_size": 63488 00:26:42.610 }, 00:26:42.610 { 00:26:42.610 "name": "BaseBdev2", 00:26:42.610 "uuid": "e1ae83ce-4178-4804-ad1c-2aedfb4ff0c6", 00:26:42.610 "is_configured": true, 00:26:42.610 "data_offset": 2048, 00:26:42.610 "data_size": 63488 00:26:42.610 } 00:26:42.610 ] 
00:26:42.610 }' 00:26:42.610 18:25:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:42.610 18:25:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:43.178 18:25:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:26:43.178 18:25:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:26:43.178 18:25:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:26:43.178 18:25:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:43.178 18:25:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:43.178 18:25:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:43.178 18:25:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:43.178 18:25:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:26:43.178 18:25:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:26:43.178 18:25:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:26:43.178 18:25:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:43.178 18:25:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:43.178 [2024-12-06 18:25:13.898451] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:26:43.178 [2024-12-06 18:25:13.898516] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:26:43.178 18:25:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:43.178 18:25:14 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:26:43.178 18:25:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:26:43.178 18:25:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:43.178 18:25:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:26:43.178 18:25:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:43.178 18:25:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:43.178 18:25:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:43.178 18:25:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:26:43.178 18:25:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:26:43.178 18:25:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:26:43.178 18:25:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 61691 00:26:43.178 18:25:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 61691 ']' 00:26:43.178 18:25:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 61691 00:26:43.178 18:25:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:26:43.178 18:25:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:43.178 18:25:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61691 00:26:43.178 killing process with pid 61691 00:26:43.178 18:25:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:26:43.178 18:25:14 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:26:43.178 18:25:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61691' 00:26:43.178 18:25:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 61691 00:26:43.178 [2024-12-06 18:25:14.102010] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:26:43.178 18:25:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 61691 00:26:43.178 [2024-12-06 18:25:14.120029] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:26:44.552 18:25:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:26:44.552 00:26:44.552 real 0m5.203s 00:26:44.552 user 0m7.342s 00:26:44.552 sys 0m1.083s 00:26:44.552 18:25:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:44.552 18:25:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:26:44.552 ************************************ 00:26:44.552 END TEST raid_state_function_test_sb 00:26:44.552 ************************************ 00:26:44.552 18:25:15 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test concat 2 00:26:44.552 18:25:15 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:26:44.552 18:25:15 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:44.552 18:25:15 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:26:44.552 ************************************ 00:26:44.552 START TEST raid_superblock_test 00:26:44.552 ************************************ 00:26:44.552 18:25:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test concat 2 00:26:44.552 18:25:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat 00:26:44.552 18:25:15 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:26:44.552 18:25:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:26:44.552 18:25:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:26:44.552 18:25:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:26:44.552 18:25:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:26:44.552 18:25:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:26:44.552 18:25:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:26:44.552 18:25:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:26:44.552 18:25:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:26:44.552 18:25:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:26:44.552 18:25:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:26:44.552 18:25:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:26:44.552 18:25:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']' 00:26:44.552 18:25:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:26:44.552 18:25:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:26:44.552 18:25:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=61943 00:26:44.552 18:25:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:26:44.552 18:25:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 61943 00:26:44.552 18:25:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 61943 ']' 00:26:44.552 
18:25:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:44.552 18:25:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:44.552 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:44.552 18:25:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:44.552 18:25:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:44.552 18:25:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:44.552 [2024-12-06 18:25:15.491491] Starting SPDK v25.01-pre git sha1 b6a18b192 / DPDK 24.03.0 initialization... 00:26:44.552 [2024-12-06 18:25:15.491617] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61943 ] 00:26:44.810 [2024-12-06 18:25:15.674898] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:45.121 [2024-12-06 18:25:15.800699] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:45.121 [2024-12-06 18:25:16.024632] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:26:45.121 [2024-12-06 18:25:16.024696] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:26:45.687 18:25:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:45.687 18:25:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:26:45.687 18:25:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:26:45.687 18:25:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 
00:26:45.687 18:25:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:26:45.687 18:25:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:26:45.687 18:25:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:26:45.687 18:25:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:26:45.687 18:25:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:26:45.687 18:25:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:26:45.687 18:25:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:26:45.687 18:25:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:45.687 18:25:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:45.687 malloc1 00:26:45.687 18:25:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:45.687 18:25:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:26:45.687 18:25:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:45.687 18:25:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:45.687 [2024-12-06 18:25:16.427014] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:26:45.687 [2024-12-06 18:25:16.427084] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:45.687 [2024-12-06 18:25:16.427111] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:26:45.687 [2024-12-06 18:25:16.427136] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev 
claimed 00:26:45.687 [2024-12-06 18:25:16.429748] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:45.687 [2024-12-06 18:25:16.429794] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:26:45.687 pt1 00:26:45.687 18:25:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:45.687 18:25:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:26:45.687 18:25:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:26:45.687 18:25:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:26:45.687 18:25:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:26:45.687 18:25:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:26:45.687 18:25:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:26:45.687 18:25:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:26:45.687 18:25:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:26:45.687 18:25:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:26:45.687 18:25:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:45.687 18:25:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:45.687 malloc2 00:26:45.687 18:25:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:45.687 18:25:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:26:45.687 18:25:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:26:45.687 18:25:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:45.687 [2024-12-06 18:25:16.485789] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:26:45.687 [2024-12-06 18:25:16.485858] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:45.687 [2024-12-06 18:25:16.485896] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:26:45.687 [2024-12-06 18:25:16.485909] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:45.687 [2024-12-06 18:25:16.488517] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:45.687 [2024-12-06 18:25:16.488560] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:26:45.687 pt2 00:26:45.687 18:25:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:45.688 18:25:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:26:45.688 18:25:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:26:45.688 18:25:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:26:45.688 18:25:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:45.688 18:25:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:45.688 [2024-12-06 18:25:16.497849] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:26:45.688 [2024-12-06 18:25:16.500091] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:26:45.688 [2024-12-06 18:25:16.500283] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:26:45.688 [2024-12-06 18:25:16.500299] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 
00:26:45.688 [2024-12-06 18:25:16.500608] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:26:45.688 [2024-12-06 18:25:16.500777] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:26:45.688 [2024-12-06 18:25:16.500791] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:26:45.688 [2024-12-06 18:25:16.500957] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:45.688 18:25:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:45.688 18:25:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:26:45.688 18:25:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:26:45.688 18:25:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:26:45.688 18:25:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:26:45.688 18:25:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:26:45.688 18:25:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:26:45.688 18:25:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:45.688 18:25:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:45.688 18:25:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:45.688 18:25:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:45.688 18:25:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:45.688 18:25:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:45.688 18:25:16 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:45.688 18:25:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:45.688 18:25:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:45.688 18:25:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:45.688 "name": "raid_bdev1", 00:26:45.688 "uuid": "3ec7777a-07e1-4c46-a581-81202eee655e", 00:26:45.688 "strip_size_kb": 64, 00:26:45.688 "state": "online", 00:26:45.688 "raid_level": "concat", 00:26:45.688 "superblock": true, 00:26:45.688 "num_base_bdevs": 2, 00:26:45.688 "num_base_bdevs_discovered": 2, 00:26:45.688 "num_base_bdevs_operational": 2, 00:26:45.688 "base_bdevs_list": [ 00:26:45.688 { 00:26:45.688 "name": "pt1", 00:26:45.688 "uuid": "00000000-0000-0000-0000-000000000001", 00:26:45.688 "is_configured": true, 00:26:45.688 "data_offset": 2048, 00:26:45.688 "data_size": 63488 00:26:45.688 }, 00:26:45.688 { 00:26:45.688 "name": "pt2", 00:26:45.688 "uuid": "00000000-0000-0000-0000-000000000002", 00:26:45.688 "is_configured": true, 00:26:45.688 "data_offset": 2048, 00:26:45.688 "data_size": 63488 00:26:45.688 } 00:26:45.688 ] 00:26:45.688 }' 00:26:45.688 18:25:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:45.688 18:25:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:46.255 18:25:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:26:46.255 18:25:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:26:46.255 18:25:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:26:46.255 18:25:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:26:46.255 18:25:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:26:46.255 
18:25:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:26:46.255 18:25:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:26:46.255 18:25:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:46.255 18:25:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:46.255 18:25:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:26:46.255 [2024-12-06 18:25:16.945840] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:26:46.255 18:25:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:46.255 18:25:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:26:46.255 "name": "raid_bdev1", 00:26:46.255 "aliases": [ 00:26:46.255 "3ec7777a-07e1-4c46-a581-81202eee655e" 00:26:46.255 ], 00:26:46.255 "product_name": "Raid Volume", 00:26:46.255 "block_size": 512, 00:26:46.255 "num_blocks": 126976, 00:26:46.255 "uuid": "3ec7777a-07e1-4c46-a581-81202eee655e", 00:26:46.255 "assigned_rate_limits": { 00:26:46.255 "rw_ios_per_sec": 0, 00:26:46.255 "rw_mbytes_per_sec": 0, 00:26:46.255 "r_mbytes_per_sec": 0, 00:26:46.255 "w_mbytes_per_sec": 0 00:26:46.255 }, 00:26:46.255 "claimed": false, 00:26:46.255 "zoned": false, 00:26:46.255 "supported_io_types": { 00:26:46.255 "read": true, 00:26:46.255 "write": true, 00:26:46.255 "unmap": true, 00:26:46.255 "flush": true, 00:26:46.255 "reset": true, 00:26:46.255 "nvme_admin": false, 00:26:46.255 "nvme_io": false, 00:26:46.255 "nvme_io_md": false, 00:26:46.255 "write_zeroes": true, 00:26:46.255 "zcopy": false, 00:26:46.255 "get_zone_info": false, 00:26:46.255 "zone_management": false, 00:26:46.255 "zone_append": false, 00:26:46.255 "compare": false, 00:26:46.255 "compare_and_write": false, 00:26:46.255 "abort": false, 00:26:46.255 "seek_hole": false, 00:26:46.255 
"seek_data": false, 00:26:46.255 "copy": false, 00:26:46.255 "nvme_iov_md": false 00:26:46.255 }, 00:26:46.255 "memory_domains": [ 00:26:46.255 { 00:26:46.255 "dma_device_id": "system", 00:26:46.255 "dma_device_type": 1 00:26:46.255 }, 00:26:46.255 { 00:26:46.255 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:46.255 "dma_device_type": 2 00:26:46.255 }, 00:26:46.255 { 00:26:46.255 "dma_device_id": "system", 00:26:46.255 "dma_device_type": 1 00:26:46.255 }, 00:26:46.255 { 00:26:46.255 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:46.255 "dma_device_type": 2 00:26:46.255 } 00:26:46.255 ], 00:26:46.255 "driver_specific": { 00:26:46.255 "raid": { 00:26:46.255 "uuid": "3ec7777a-07e1-4c46-a581-81202eee655e", 00:26:46.255 "strip_size_kb": 64, 00:26:46.255 "state": "online", 00:26:46.255 "raid_level": "concat", 00:26:46.255 "superblock": true, 00:26:46.255 "num_base_bdevs": 2, 00:26:46.255 "num_base_bdevs_discovered": 2, 00:26:46.255 "num_base_bdevs_operational": 2, 00:26:46.255 "base_bdevs_list": [ 00:26:46.255 { 00:26:46.255 "name": "pt1", 00:26:46.255 "uuid": "00000000-0000-0000-0000-000000000001", 00:26:46.255 "is_configured": true, 00:26:46.255 "data_offset": 2048, 00:26:46.255 "data_size": 63488 00:26:46.255 }, 00:26:46.255 { 00:26:46.255 "name": "pt2", 00:26:46.255 "uuid": "00000000-0000-0000-0000-000000000002", 00:26:46.255 "is_configured": true, 00:26:46.255 "data_offset": 2048, 00:26:46.255 "data_size": 63488 00:26:46.255 } 00:26:46.255 ] 00:26:46.255 } 00:26:46.255 } 00:26:46.255 }' 00:26:46.256 18:25:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:26:46.256 18:25:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:26:46.256 pt2' 00:26:46.256 18:25:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:26:46.256 18:25:17 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:26:46.256 18:25:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:26:46.256 18:25:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:26:46.256 18:25:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:46.256 18:25:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:46.256 18:25:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:26:46.256 18:25:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:46.256 18:25:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:26:46.256 18:25:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:26:46.256 18:25:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:26:46.256 18:25:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:26:46.256 18:25:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:26:46.256 18:25:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:46.256 18:25:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:46.256 18:25:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:46.256 18:25:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:26:46.256 18:25:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:26:46.256 18:25:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b 
raid_bdev1 00:26:46.256 18:25:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:46.256 18:25:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:46.256 18:25:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:26:46.256 [2024-12-06 18:25:17.165820] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:26:46.256 18:25:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:46.514 18:25:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=3ec7777a-07e1-4c46-a581-81202eee655e 00:26:46.514 18:25:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 3ec7777a-07e1-4c46-a581-81202eee655e ']' 00:26:46.515 18:25:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:26:46.515 18:25:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:46.515 18:25:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:46.515 [2024-12-06 18:25:17.209512] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:26:46.515 [2024-12-06 18:25:17.209548] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:26:46.515 [2024-12-06 18:25:17.209653] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:26:46.515 [2024-12-06 18:25:17.209705] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:26:46.515 [2024-12-06 18:25:17.209721] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:26:46.515 18:25:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:46.515 18:25:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:26:46.515 18:25:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:26:46.515 18:25:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:46.515 18:25:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:46.515 18:25:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:46.515 18:25:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:26:46.515 18:25:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:26:46.515 18:25:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:26:46.515 18:25:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:26:46.515 18:25:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:46.515 18:25:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:46.515 18:25:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:46.515 18:25:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:26:46.515 18:25:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:26:46.515 18:25:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:46.515 18:25:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:46.515 18:25:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:46.515 18:25:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:26:46.515 18:25:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:46.515 18:25:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 
00:26:46.515 18:25:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:26:46.515 18:25:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:46.515 18:25:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:26:46.515 18:25:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:26:46.515 18:25:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:26:46.515 18:25:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:26:46.515 18:25:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:26:46.515 18:25:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:46.515 18:25:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:26:46.515 18:25:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:46.515 18:25:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:26:46.515 18:25:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:46.515 18:25:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:46.515 [2024-12-06 18:25:17.345563] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:26:46.515 [2024-12-06 18:25:17.347934] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:26:46.515 [2024-12-06 18:25:17.348026] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: 
*ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:26:46.515 [2024-12-06 18:25:17.348087] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:26:46.515 [2024-12-06 18:25:17.348105] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:26:46.515 [2024-12-06 18:25:17.348118] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:26:46.515 request: 00:26:46.515 { 00:26:46.515 "name": "raid_bdev1", 00:26:46.515 "raid_level": "concat", 00:26:46.515 "base_bdevs": [ 00:26:46.515 "malloc1", 00:26:46.515 "malloc2" 00:26:46.515 ], 00:26:46.515 "strip_size_kb": 64, 00:26:46.515 "superblock": false, 00:26:46.515 "method": "bdev_raid_create", 00:26:46.515 "req_id": 1 00:26:46.515 } 00:26:46.515 Got JSON-RPC error response 00:26:46.515 response: 00:26:46.515 { 00:26:46.515 "code": -17, 00:26:46.515 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:26:46.515 } 00:26:46.515 18:25:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:26:46.515 18:25:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:26:46.515 18:25:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:26:46.515 18:25:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:26:46.515 18:25:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:26:46.515 18:25:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:26:46.515 18:25:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:46.515 18:25:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:46.515 18:25:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:46.515 
18:25:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:46.515 18:25:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:26:46.515 18:25:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:26:46.515 18:25:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:26:46.515 18:25:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:46.515 18:25:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:46.515 [2024-12-06 18:25:17.405545] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:26:46.515 [2024-12-06 18:25:17.405624] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:46.515 [2024-12-06 18:25:17.405646] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:26:46.515 [2024-12-06 18:25:17.405661] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:46.515 [2024-12-06 18:25:17.408447] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:46.515 [2024-12-06 18:25:17.408491] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:26:46.515 [2024-12-06 18:25:17.408584] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:26:46.515 [2024-12-06 18:25:17.408643] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:26:46.515 pt1 00:26:46.515 18:25:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:46.515 18:25:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 2 00:26:46.515 18:25:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=raid_bdev1 00:26:46.515 18:25:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:26:46.515 18:25:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:26:46.515 18:25:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:26:46.515 18:25:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:26:46.515 18:25:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:46.515 18:25:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:46.515 18:25:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:46.515 18:25:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:46.515 18:25:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:46.515 18:25:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:46.515 18:25:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:46.515 18:25:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:46.515 18:25:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:46.515 18:25:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:46.515 "name": "raid_bdev1", 00:26:46.515 "uuid": "3ec7777a-07e1-4c46-a581-81202eee655e", 00:26:46.515 "strip_size_kb": 64, 00:26:46.515 "state": "configuring", 00:26:46.515 "raid_level": "concat", 00:26:46.515 "superblock": true, 00:26:46.515 "num_base_bdevs": 2, 00:26:46.515 "num_base_bdevs_discovered": 1, 00:26:46.515 "num_base_bdevs_operational": 2, 00:26:46.515 "base_bdevs_list": [ 00:26:46.515 { 00:26:46.515 "name": "pt1", 00:26:46.515 "uuid": 
"00000000-0000-0000-0000-000000000001", 00:26:46.515 "is_configured": true, 00:26:46.515 "data_offset": 2048, 00:26:46.515 "data_size": 63488 00:26:46.515 }, 00:26:46.515 { 00:26:46.515 "name": null, 00:26:46.515 "uuid": "00000000-0000-0000-0000-000000000002", 00:26:46.515 "is_configured": false, 00:26:46.515 "data_offset": 2048, 00:26:46.515 "data_size": 63488 00:26:46.515 } 00:26:46.515 ] 00:26:46.515 }' 00:26:46.515 18:25:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:46.515 18:25:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:47.083 18:25:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:26:47.083 18:25:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:26:47.083 18:25:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:26:47.083 18:25:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:26:47.083 18:25:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:47.083 18:25:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:47.083 [2024-12-06 18:25:17.861138] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:26:47.083 [2024-12-06 18:25:17.861233] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:47.083 [2024-12-06 18:25:17.861259] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:26:47.083 [2024-12-06 18:25:17.861275] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:47.083 [2024-12-06 18:25:17.861785] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:47.083 [2024-12-06 18:25:17.861811] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
pt2 00:26:47.083 [2024-12-06 18:25:17.861902] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:26:47.083 [2024-12-06 18:25:17.861933] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:26:47.083 [2024-12-06 18:25:17.862045] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:26:47.083 [2024-12-06 18:25:17.862059] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:26:47.083 [2024-12-06 18:25:17.862341] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:26:47.083 [2024-12-06 18:25:17.862492] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:26:47.083 [2024-12-06 18:25:17.862513] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:26:47.083 [2024-12-06 18:25:17.862659] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:47.083 pt2 00:26:47.083 18:25:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:47.083 18:25:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:26:47.083 18:25:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:26:47.083 18:25:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:26:47.083 18:25:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:26:47.083 18:25:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:26:47.083 18:25:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:26:47.083 18:25:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:26:47.083 18:25:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 
-- # local num_base_bdevs_operational=2 00:26:47.083 18:25:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:47.083 18:25:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:47.083 18:25:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:47.083 18:25:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:47.083 18:25:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:47.083 18:25:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:47.083 18:25:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:47.083 18:25:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:47.083 18:25:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:47.083 18:25:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:47.083 "name": "raid_bdev1", 00:26:47.083 "uuid": "3ec7777a-07e1-4c46-a581-81202eee655e", 00:26:47.083 "strip_size_kb": 64, 00:26:47.083 "state": "online", 00:26:47.083 "raid_level": "concat", 00:26:47.083 "superblock": true, 00:26:47.083 "num_base_bdevs": 2, 00:26:47.083 "num_base_bdevs_discovered": 2, 00:26:47.083 "num_base_bdevs_operational": 2, 00:26:47.083 "base_bdevs_list": [ 00:26:47.083 { 00:26:47.083 "name": "pt1", 00:26:47.083 "uuid": "00000000-0000-0000-0000-000000000001", 00:26:47.083 "is_configured": true, 00:26:47.083 "data_offset": 2048, 00:26:47.083 "data_size": 63488 00:26:47.083 }, 00:26:47.083 { 00:26:47.083 "name": "pt2", 00:26:47.083 "uuid": "00000000-0000-0000-0000-000000000002", 00:26:47.083 "is_configured": true, 00:26:47.083 "data_offset": 2048, 00:26:47.083 "data_size": 63488 00:26:47.083 } 00:26:47.083 ] 00:26:47.083 }' 00:26:47.083 18:25:17 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:47.083 18:25:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:47.341 18:25:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:26:47.341 18:25:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:26:47.341 18:25:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:26:47.341 18:25:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:26:47.341 18:25:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:26:47.341 18:25:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:26:47.341 18:25:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:26:47.341 18:25:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:26:47.341 18:25:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:47.341 18:25:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:47.341 [2024-12-06 18:25:18.284808] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:26:47.601 18:25:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:47.601 18:25:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:26:47.601 "name": "raid_bdev1", 00:26:47.601 "aliases": [ 00:26:47.601 "3ec7777a-07e1-4c46-a581-81202eee655e" 00:26:47.601 ], 00:26:47.601 "product_name": "Raid Volume", 00:26:47.601 "block_size": 512, 00:26:47.601 "num_blocks": 126976, 00:26:47.601 "uuid": "3ec7777a-07e1-4c46-a581-81202eee655e", 00:26:47.601 "assigned_rate_limits": { 00:26:47.601 "rw_ios_per_sec": 0, 00:26:47.601 "rw_mbytes_per_sec": 0, 00:26:47.601 
"r_mbytes_per_sec": 0, 00:26:47.601 "w_mbytes_per_sec": 0 00:26:47.601 }, 00:26:47.601 "claimed": false, 00:26:47.601 "zoned": false, 00:26:47.601 "supported_io_types": { 00:26:47.601 "read": true, 00:26:47.601 "write": true, 00:26:47.601 "unmap": true, 00:26:47.601 "flush": true, 00:26:47.601 "reset": true, 00:26:47.601 "nvme_admin": false, 00:26:47.601 "nvme_io": false, 00:26:47.601 "nvme_io_md": false, 00:26:47.601 "write_zeroes": true, 00:26:47.601 "zcopy": false, 00:26:47.601 "get_zone_info": false, 00:26:47.601 "zone_management": false, 00:26:47.601 "zone_append": false, 00:26:47.601 "compare": false, 00:26:47.601 "compare_and_write": false, 00:26:47.601 "abort": false, 00:26:47.601 "seek_hole": false, 00:26:47.601 "seek_data": false, 00:26:47.601 "copy": false, 00:26:47.601 "nvme_iov_md": false 00:26:47.601 }, 00:26:47.601 "memory_domains": [ 00:26:47.601 { 00:26:47.601 "dma_device_id": "system", 00:26:47.601 "dma_device_type": 1 00:26:47.601 }, 00:26:47.601 { 00:26:47.601 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:47.601 "dma_device_type": 2 00:26:47.601 }, 00:26:47.601 { 00:26:47.601 "dma_device_id": "system", 00:26:47.601 "dma_device_type": 1 00:26:47.601 }, 00:26:47.601 { 00:26:47.601 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:47.601 "dma_device_type": 2 00:26:47.601 } 00:26:47.601 ], 00:26:47.601 "driver_specific": { 00:26:47.601 "raid": { 00:26:47.601 "uuid": "3ec7777a-07e1-4c46-a581-81202eee655e", 00:26:47.601 "strip_size_kb": 64, 00:26:47.601 "state": "online", 00:26:47.601 "raid_level": "concat", 00:26:47.601 "superblock": true, 00:26:47.601 "num_base_bdevs": 2, 00:26:47.601 "num_base_bdevs_discovered": 2, 00:26:47.601 "num_base_bdevs_operational": 2, 00:26:47.601 "base_bdevs_list": [ 00:26:47.601 { 00:26:47.601 "name": "pt1", 00:26:47.601 "uuid": "00000000-0000-0000-0000-000000000001", 00:26:47.601 "is_configured": true, 00:26:47.601 "data_offset": 2048, 00:26:47.601 "data_size": 63488 00:26:47.601 }, 00:26:47.601 { 00:26:47.601 "name": 
"pt2", 00:26:47.601 "uuid": "00000000-0000-0000-0000-000000000002", 00:26:47.601 "is_configured": true, 00:26:47.601 "data_offset": 2048, 00:26:47.601 "data_size": 63488 00:26:47.601 } 00:26:47.601 ] 00:26:47.601 } 00:26:47.601 } 00:26:47.601 }' 00:26:47.601 18:25:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:26:47.601 18:25:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:26:47.601 pt2' 00:26:47.601 18:25:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:26:47.601 18:25:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:26:47.601 18:25:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:26:47.601 18:25:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:26:47.601 18:25:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:26:47.601 18:25:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:47.601 18:25:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:47.601 18:25:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:47.601 18:25:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:26:47.601 18:25:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:26:47.601 18:25:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:26:47.601 18:25:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:26:47.601 18:25:18 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:26:47.601 18:25:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:47.601 18:25:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:47.601 18:25:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:47.601 18:25:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:26:47.601 18:25:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:26:47.601 18:25:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:26:47.602 18:25:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:26:47.602 18:25:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:47.602 18:25:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:26:47.602 [2024-12-06 18:25:18.504610] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:26:47.602 18:25:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:47.602 18:25:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 3ec7777a-07e1-4c46-a581-81202eee655e '!=' 3ec7777a-07e1-4c46-a581-81202eee655e ']' 00:26:47.602 18:25:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat 00:26:47.602 18:25:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:26:47.602 18:25:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:26:47.602 18:25:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 61943 00:26:47.602 18:25:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 61943 ']' 00:26:47.602 18:25:18 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@958 -- # kill -0 61943 00:26:47.602 18:25:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:26:47.602 18:25:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:47.602 18:25:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61943 00:26:47.861 18:25:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:26:47.861 18:25:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:26:47.861 killing process with pid 61943 00:26:47.861 18:25:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61943' 00:26:47.861 18:25:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 61943 00:26:47.861 [2024-12-06 18:25:18.587352] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:26:47.861 [2024-12-06 18:25:18.587459] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:26:47.861 18:25:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 61943 00:26:47.861 [2024-12-06 18:25:18.587512] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:26:47.861 [2024-12-06 18:25:18.587526] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:26:47.861 [2024-12-06 18:25:18.806584] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:26:49.239 18:25:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:26:49.239 00:26:49.239 real 0m4.618s 00:26:49.239 user 0m6.387s 00:26:49.239 sys 0m0.886s 00:26:49.239 18:25:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:49.239 18:25:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # 
set +x 00:26:49.239 ************************************ 00:26:49.239 END TEST raid_superblock_test 00:26:49.239 ************************************ 00:26:49.239 18:25:20 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test concat 2 read 00:26:49.239 18:25:20 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:26:49.239 18:25:20 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:49.239 18:25:20 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:26:49.239 ************************************ 00:26:49.239 START TEST raid_read_error_test 00:26:49.239 ************************************ 00:26:49.239 18:25:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 2 read 00:26:49.239 18:25:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:26:49.239 18:25:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:26:49.239 18:25:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:26:49.239 18:25:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:26:49.239 18:25:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:26:49.239 18:25:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:26:49.239 18:25:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:26:49.239 18:25:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:26:49.239 18:25:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:26:49.239 18:25:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:26:49.239 18:25:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:26:49.239 18:25:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- 
# base_bdevs=('BaseBdev1' 'BaseBdev2') 00:26:49.239 18:25:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:26:49.239 18:25:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:26:49.239 18:25:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:26:49.239 18:25:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:26:49.239 18:25:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:26:49.239 18:25:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:26:49.239 18:25:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:26:49.239 18:25:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:26:49.239 18:25:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:26:49.240 18:25:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:26:49.240 18:25:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.v3v2gD5qmF 00:26:49.240 18:25:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=62159 00:26:49.240 18:25:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:26:49.240 18:25:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 62159 00:26:49.240 18:25:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 62159 ']' 00:26:49.240 18:25:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:49.240 18:25:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:49.240 18:25:20 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:49.240 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:49.240 18:25:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:49.240 18:25:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:26:49.499 [2024-12-06 18:25:20.212369] Starting SPDK v25.01-pre git sha1 b6a18b192 / DPDK 24.03.0 initialization... 00:26:49.499 [2024-12-06 18:25:20.212500] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62159 ] 00:26:49.499 [2024-12-06 18:25:20.395383] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:49.758 [2024-12-06 18:25:20.521781] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:50.018 [2024-12-06 18:25:20.744699] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:26:50.018 [2024-12-06 18:25:20.744773] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:26:50.277 18:25:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:50.277 18:25:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:26:50.277 18:25:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:26:50.277 18:25:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:26:50.277 18:25:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:50.277 18:25:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:26:50.277 BaseBdev1_malloc 
00:26:50.277 18:25:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:50.277 18:25:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:26:50.277 18:25:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:50.277 18:25:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:26:50.277 true 00:26:50.277 18:25:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:50.277 18:25:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:26:50.277 18:25:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:50.277 18:25:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:26:50.277 [2024-12-06 18:25:21.193443] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:26:50.277 [2024-12-06 18:25:21.193521] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:50.277 [2024-12-06 18:25:21.193550] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:26:50.277 [2024-12-06 18:25:21.193566] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:50.277 [2024-12-06 18:25:21.196315] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:50.277 [2024-12-06 18:25:21.196365] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:26:50.277 BaseBdev1 00:26:50.277 18:25:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:50.277 18:25:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:26:50.277 18:25:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 
32 512 -b BaseBdev2_malloc 00:26:50.277 18:25:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:50.277 18:25:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:26:50.537 BaseBdev2_malloc 00:26:50.537 18:25:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:50.537 18:25:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:26:50.537 18:25:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:50.537 18:25:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:26:50.537 true 00:26:50.537 18:25:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:50.537 18:25:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:26:50.537 18:25:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:50.537 18:25:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:26:50.537 [2024-12-06 18:25:21.264182] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:26:50.537 [2024-12-06 18:25:21.264260] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:50.537 [2024-12-06 18:25:21.264283] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:26:50.537 [2024-12-06 18:25:21.264298] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:50.537 [2024-12-06 18:25:21.266986] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:50.537 [2024-12-06 18:25:21.267039] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:26:50.537 BaseBdev2 00:26:50.537 18:25:21 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:50.537 18:25:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:26:50.537 18:25:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:50.537 18:25:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:26:50.537 [2024-12-06 18:25:21.276247] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:26:50.537 [2024-12-06 18:25:21.278597] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:26:50.537 [2024-12-06 18:25:21.278817] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:26:50.537 [2024-12-06 18:25:21.278834] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:26:50.537 [2024-12-06 18:25:21.279128] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:26:50.537 [2024-12-06 18:25:21.279341] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:26:50.537 [2024-12-06 18:25:21.279373] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:26:50.537 [2024-12-06 18:25:21.279549] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:50.537 18:25:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:50.537 18:25:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:26:50.537 18:25:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:26:50.537 18:25:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:26:50.537 18:25:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- 
# local raid_level=concat 00:26:50.537 18:25:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:26:50.537 18:25:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:26:50.537 18:25:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:50.537 18:25:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:50.537 18:25:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:50.537 18:25:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:50.537 18:25:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:50.537 18:25:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:50.537 18:25:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:50.537 18:25:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:26:50.537 18:25:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:50.537 18:25:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:50.537 "name": "raid_bdev1", 00:26:50.537 "uuid": "08ea89a3-e14c-40ec-9754-34496a6454c6", 00:26:50.537 "strip_size_kb": 64, 00:26:50.537 "state": "online", 00:26:50.537 "raid_level": "concat", 00:26:50.537 "superblock": true, 00:26:50.537 "num_base_bdevs": 2, 00:26:50.537 "num_base_bdevs_discovered": 2, 00:26:50.537 "num_base_bdevs_operational": 2, 00:26:50.537 "base_bdevs_list": [ 00:26:50.537 { 00:26:50.537 "name": "BaseBdev1", 00:26:50.537 "uuid": "488e0f65-e6ec-53cb-99ae-a2b1293502ba", 00:26:50.537 "is_configured": true, 00:26:50.537 "data_offset": 2048, 00:26:50.537 "data_size": 63488 00:26:50.537 }, 00:26:50.537 { 00:26:50.537 "name": "BaseBdev2", 00:26:50.537 
"uuid": "4dc95298-ef4c-5b99-b69b-c70dc7dac4e6", 00:26:50.537 "is_configured": true, 00:26:50.537 "data_offset": 2048, 00:26:50.537 "data_size": 63488 00:26:50.537 } 00:26:50.537 ] 00:26:50.537 }' 00:26:50.537 18:25:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:50.537 18:25:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:26:50.796 18:25:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:26:50.796 18:25:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:26:51.056 [2024-12-06 18:25:21.848917] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:26:52.005 18:25:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:26:52.005 18:25:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:52.005 18:25:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:26:52.005 18:25:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:52.005 18:25:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:26:52.005 18:25:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:26:52.005 18:25:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:26:52.005 18:25:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:26:52.005 18:25:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:26:52.005 18:25:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:26:52.005 18:25:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # 
local raid_level=concat 00:26:52.005 18:25:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:26:52.005 18:25:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:26:52.005 18:25:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:52.005 18:25:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:52.005 18:25:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:52.005 18:25:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:52.005 18:25:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:52.005 18:25:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:52.005 18:25:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:52.005 18:25:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:26:52.005 18:25:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:52.005 18:25:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:52.005 "name": "raid_bdev1", 00:26:52.005 "uuid": "08ea89a3-e14c-40ec-9754-34496a6454c6", 00:26:52.005 "strip_size_kb": 64, 00:26:52.005 "state": "online", 00:26:52.005 "raid_level": "concat", 00:26:52.005 "superblock": true, 00:26:52.005 "num_base_bdevs": 2, 00:26:52.005 "num_base_bdevs_discovered": 2, 00:26:52.005 "num_base_bdevs_operational": 2, 00:26:52.005 "base_bdevs_list": [ 00:26:52.005 { 00:26:52.005 "name": "BaseBdev1", 00:26:52.005 "uuid": "488e0f65-e6ec-53cb-99ae-a2b1293502ba", 00:26:52.005 "is_configured": true, 00:26:52.005 "data_offset": 2048, 00:26:52.005 "data_size": 63488 00:26:52.005 }, 00:26:52.005 { 00:26:52.005 "name": "BaseBdev2", 00:26:52.005 "uuid": 
"4dc95298-ef4c-5b99-b69b-c70dc7dac4e6", 00:26:52.005 "is_configured": true, 00:26:52.005 "data_offset": 2048, 00:26:52.005 "data_size": 63488 00:26:52.005 } 00:26:52.005 ] 00:26:52.005 }' 00:26:52.005 18:25:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:52.005 18:25:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:26:52.574 18:25:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:26:52.574 18:25:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:52.574 18:25:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:26:52.574 [2024-12-06 18:25:23.224635] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:26:52.574 [2024-12-06 18:25:23.224701] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:26:52.574 [2024-12-06 18:25:23.227652] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:26:52.574 [2024-12-06 18:25:23.227713] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:52.574 [2024-12-06 18:25:23.227748] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:26:52.574 [2024-12-06 18:25:23.227763] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:26:52.574 { 00:26:52.574 "results": [ 00:26:52.574 { 00:26:52.574 "job": "raid_bdev1", 00:26:52.574 "core_mask": "0x1", 00:26:52.574 "workload": "randrw", 00:26:52.574 "percentage": 50, 00:26:52.574 "status": "finished", 00:26:52.574 "queue_depth": 1, 00:26:52.574 "io_size": 131072, 00:26:52.574 "runtime": 1.375818, 00:26:52.574 "iops": 14732.32651411742, 00:26:52.574 "mibps": 1841.5408142646775, 00:26:52.574 "io_failed": 1, 00:26:52.574 "io_timeout": 0, 00:26:52.574 "avg_latency_us": 
93.65951747790373, 00:26:52.574 "min_latency_us": 27.553413654618474, 00:26:52.574 "max_latency_us": 1526.5413654618474 00:26:52.574 } 00:26:52.574 ], 00:26:52.574 "core_count": 1 00:26:52.574 } 00:26:52.574 18:25:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:52.574 18:25:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 62159 00:26:52.574 18:25:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 62159 ']' 00:26:52.574 18:25:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 62159 00:26:52.574 18:25:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:26:52.574 18:25:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:52.574 18:25:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62159 00:26:52.574 18:25:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:26:52.574 18:25:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:26:52.574 killing process with pid 62159 00:26:52.574 18:25:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62159' 00:26:52.574 18:25:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 62159 00:26:52.574 [2024-12-06 18:25:23.281348] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:26:52.574 18:25:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 62159 00:26:52.574 [2024-12-06 18:25:23.427698] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:26:53.955 18:25:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.v3v2gD5qmF 00:26:53.955 18:25:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:26:53.955 
18:25:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:26:53.955 18:25:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.73 00:26:53.955 18:25:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:26:53.955 18:25:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:26:53.955 18:25:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:26:53.955 18:25:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.73 != \0\.\0\0 ]] 00:26:53.955 00:26:53.955 real 0m4.611s 00:26:53.955 user 0m5.510s 00:26:53.955 sys 0m0.686s 00:26:53.955 18:25:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:53.955 18:25:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:26:53.955 ************************************ 00:26:53.955 END TEST raid_read_error_test 00:26:53.955 ************************************ 00:26:53.955 18:25:24 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test concat 2 write 00:26:53.955 18:25:24 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:26:53.955 18:25:24 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:53.955 18:25:24 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:26:53.955 ************************************ 00:26:53.955 START TEST raid_write_error_test 00:26:53.955 ************************************ 00:26:53.955 18:25:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 2 write 00:26:53.955 18:25:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:26:53.955 18:25:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:26:53.955 18:25:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 
00:26:53.955 18:25:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:26:53.955 18:25:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:26:53.955 18:25:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:26:53.955 18:25:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:26:53.955 18:25:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:26:53.955 18:25:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:26:53.955 18:25:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:26:53.955 18:25:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:26:53.955 18:25:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:26:53.955 18:25:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:26:53.955 18:25:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:26:53.955 18:25:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:26:53.955 18:25:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:26:53.955 18:25:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:26:53.955 18:25:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:26:53.955 18:25:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:26:53.955 18:25:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:26:53.955 18:25:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:26:53.955 18:25:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:26:53.955 18:25:24 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.IHMRy0L7Bk 00:26:53.955 18:25:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=62300 00:26:53.955 18:25:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:26:53.955 18:25:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 62300 00:26:53.955 18:25:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 62300 ']' 00:26:53.955 18:25:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:53.955 18:25:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:53.955 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:53.955 18:25:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:53.955 18:25:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:53.955 18:25:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:26:54.215 [2024-12-06 18:25:24.940957] Starting SPDK v25.01-pre git sha1 b6a18b192 / DPDK 24.03.0 initialization... 
00:26:54.215 [2024-12-06 18:25:24.941231] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62300 ] 00:26:54.215 [2024-12-06 18:25:25.152128] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:54.474 [2024-12-06 18:25:25.277035] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:54.732 [2024-12-06 18:25:25.500270] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:26:54.732 [2024-12-06 18:25:25.500313] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:26:54.992 18:25:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:54.992 18:25:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:26:54.992 18:25:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:26:54.992 18:25:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:26:54.992 18:25:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:54.992 18:25:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:26:54.992 BaseBdev1_malloc 00:26:54.992 18:25:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:54.992 18:25:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:26:54.992 18:25:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:54.992 18:25:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:26:54.992 true 00:26:54.992 18:25:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:26:54.992 18:25:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:26:54.992 18:25:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:54.992 18:25:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:26:54.992 [2024-12-06 18:25:25.834191] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:26:54.992 [2024-12-06 18:25:25.834261] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:54.992 [2024-12-06 18:25:25.834288] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:26:54.992 [2024-12-06 18:25:25.834304] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:54.992 [2024-12-06 18:25:25.837017] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:54.992 [2024-12-06 18:25:25.837071] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:26:54.992 BaseBdev1 00:26:54.992 18:25:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:54.992 18:25:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:26:54.992 18:25:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:26:54.992 18:25:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:54.992 18:25:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:26:54.992 BaseBdev2_malloc 00:26:54.992 18:25:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:54.992 18:25:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:26:54.992 18:25:25 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:54.992 18:25:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:26:54.992 true 00:26:54.992 18:25:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:54.992 18:25:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:26:54.992 18:25:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:54.992 18:25:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:26:54.992 [2024-12-06 18:25:25.905543] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:26:54.992 [2024-12-06 18:25:25.905616] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:26:54.992 [2024-12-06 18:25:25.905640] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:26:54.992 [2024-12-06 18:25:25.905656] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:26:54.992 [2024-12-06 18:25:25.908313] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:26:54.992 [2024-12-06 18:25:25.908362] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:26:54.992 BaseBdev2 00:26:54.992 18:25:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:54.992 18:25:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:26:54.992 18:25:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:54.992 18:25:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:26:54.992 [2024-12-06 18:25:25.917679] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev1 is claimed 00:26:54.992 [2024-12-06 18:25:25.920065] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:26:54.992 [2024-12-06 18:25:25.920321] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:26:54.992 [2024-12-06 18:25:25.920348] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:26:54.992 [2024-12-06 18:25:25.920694] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:26:54.992 [2024-12-06 18:25:25.920915] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:26:54.992 [2024-12-06 18:25:25.920937] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:26:54.992 [2024-12-06 18:25:25.921128] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:54.992 18:25:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:54.992 18:25:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:26:54.992 18:25:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:26:54.992 18:25:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:26:54.992 18:25:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:26:54.992 18:25:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:26:54.992 18:25:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:26:54.992 18:25:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:54.992 18:25:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:54.992 18:25:25 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:54.992 18:25:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:54.992 18:25:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:54.992 18:25:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:54.992 18:25:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:54.992 18:25:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:26:55.250 18:25:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:55.250 18:25:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:55.250 "name": "raid_bdev1", 00:26:55.250 "uuid": "513a7624-511d-4cee-9bf1-d729295d7f2f", 00:26:55.250 "strip_size_kb": 64, 00:26:55.250 "state": "online", 00:26:55.250 "raid_level": "concat", 00:26:55.250 "superblock": true, 00:26:55.250 "num_base_bdevs": 2, 00:26:55.250 "num_base_bdevs_discovered": 2, 00:26:55.250 "num_base_bdevs_operational": 2, 00:26:55.250 "base_bdevs_list": [ 00:26:55.250 { 00:26:55.250 "name": "BaseBdev1", 00:26:55.250 "uuid": "12e76d27-aed4-5a2f-8283-94caf3546ad2", 00:26:55.250 "is_configured": true, 00:26:55.250 "data_offset": 2048, 00:26:55.250 "data_size": 63488 00:26:55.250 }, 00:26:55.250 { 00:26:55.250 "name": "BaseBdev2", 00:26:55.250 "uuid": "8dd73a2b-7888-54e2-abd2-8179e72e38f0", 00:26:55.250 "is_configured": true, 00:26:55.250 "data_offset": 2048, 00:26:55.250 "data_size": 63488 00:26:55.250 } 00:26:55.250 ] 00:26:55.250 }' 00:26:55.250 18:25:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:55.250 18:25:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:26:55.509 18:25:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- 
# /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:26:55.509 18:25:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:26:55.768 [2024-12-06 18:25:26.467043] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:26:56.738 18:25:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:26:56.738 18:25:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:56.738 18:25:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:26:56.738 18:25:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:56.738 18:25:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:26:56.738 18:25:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:26:56.738 18:25:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:26:56.738 18:25:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:26:56.738 18:25:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:26:56.738 18:25:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:26:56.738 18:25:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:26:56.738 18:25:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:26:56.738 18:25:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:26:56.738 18:25:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:56.738 18:25:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 
00:26:56.738 18:25:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:56.738 18:25:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:56.739 18:25:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:26:56.739 18:25:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:56.739 18:25:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:56.739 18:25:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:26:56.739 18:25:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:56.739 18:25:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:56.739 "name": "raid_bdev1", 00:26:56.739 "uuid": "513a7624-511d-4cee-9bf1-d729295d7f2f", 00:26:56.739 "strip_size_kb": 64, 00:26:56.739 "state": "online", 00:26:56.739 "raid_level": "concat", 00:26:56.739 "superblock": true, 00:26:56.739 "num_base_bdevs": 2, 00:26:56.739 "num_base_bdevs_discovered": 2, 00:26:56.739 "num_base_bdevs_operational": 2, 00:26:56.739 "base_bdevs_list": [ 00:26:56.739 { 00:26:56.739 "name": "BaseBdev1", 00:26:56.739 "uuid": "12e76d27-aed4-5a2f-8283-94caf3546ad2", 00:26:56.739 "is_configured": true, 00:26:56.739 "data_offset": 2048, 00:26:56.739 "data_size": 63488 00:26:56.739 }, 00:26:56.739 { 00:26:56.739 "name": "BaseBdev2", 00:26:56.739 "uuid": "8dd73a2b-7888-54e2-abd2-8179e72e38f0", 00:26:56.739 "is_configured": true, 00:26:56.739 "data_offset": 2048, 00:26:56.739 "data_size": 63488 00:26:56.739 } 00:26:56.739 ] 00:26:56.739 }' 00:26:56.739 18:25:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:56.739 18:25:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:26:56.998 18:25:27 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:26:56.998 18:25:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:56.998 18:25:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:26:56.998 [2024-12-06 18:25:27.864420] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:26:56.998 [2024-12-06 18:25:27.864481] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:26:56.998 [2024-12-06 18:25:27.867351] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:26:56.998 [2024-12-06 18:25:27.867405] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:26:56.998 [2024-12-06 18:25:27.867439] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:26:56.998 [2024-12-06 18:25:27.867457] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:26:56.998 { 00:26:56.998 "results": [ 00:26:56.998 { 00:26:56.998 "job": "raid_bdev1", 00:26:56.998 "core_mask": "0x1", 00:26:56.998 "workload": "randrw", 00:26:56.998 "percentage": 50, 00:26:56.998 "status": "finished", 00:26:56.998 "queue_depth": 1, 00:26:56.998 "io_size": 131072, 00:26:56.998 "runtime": 1.397472, 00:26:56.998 "iops": 14768.811110347828, 00:26:56.998 "mibps": 1846.1013887934785, 00:26:56.998 "io_failed": 1, 00:26:56.998 "io_timeout": 0, 00:26:56.998 "avg_latency_us": 93.4687164160518, 00:26:56.998 "min_latency_us": 27.347791164658634, 00:26:56.998 "max_latency_us": 1605.5004016064256 00:26:56.998 } 00:26:56.998 ], 00:26:56.998 "core_count": 1 00:26:56.998 } 00:26:56.998 18:25:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:56.998 18:25:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 62300 00:26:56.998 18:25:27 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 62300 ']' 00:26:56.998 18:25:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 62300 00:26:56.998 18:25:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:26:56.998 18:25:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:56.998 18:25:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62300 00:26:56.998 18:25:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:26:56.998 18:25:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:26:56.998 killing process with pid 62300 00:26:56.998 18:25:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62300' 00:26:56.998 18:25:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 62300 00:26:56.998 [2024-12-06 18:25:27.904772] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:26:56.998 18:25:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 62300 00:26:57.257 [2024-12-06 18:25:28.050995] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:26:58.633 18:25:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:26:58.633 18:25:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.IHMRy0L7Bk 00:26:58.633 18:25:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:26:58.633 18:25:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.72 00:26:58.633 18:25:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:26:58.633 18:25:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:26:58.633 18:25:29 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:26:58.633 18:25:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.72 != \0\.\0\0 ]] 00:26:58.633 00:26:58.633 real 0m4.540s 00:26:58.633 user 0m5.362s 00:26:58.633 sys 0m0.674s 00:26:58.633 18:25:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:58.633 18:25:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:26:58.633 ************************************ 00:26:58.633 END TEST raid_write_error_test 00:26:58.633 ************************************ 00:26:58.633 18:25:29 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:26:58.633 18:25:29 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid1 2 false 00:26:58.633 18:25:29 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:26:58.633 18:25:29 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:58.633 18:25:29 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:26:58.633 ************************************ 00:26:58.633 START TEST raid_state_function_test 00:26:58.633 ************************************ 00:26:58.633 18:25:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 2 false 00:26:58.633 18:25:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:26:58.633 18:25:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:26:58.633 18:25:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:26:58.633 18:25:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:26:58.633 18:25:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:26:58.633 18:25:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- 
# (( i <= num_base_bdevs )) 00:26:58.633 18:25:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:26:58.633 18:25:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:26:58.633 18:25:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:26:58.633 18:25:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:26:58.633 18:25:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:26:58.633 18:25:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:26:58.633 18:25:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:26:58.633 18:25:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:26:58.633 18:25:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:26:58.633 18:25:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:26:58.633 18:25:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:26:58.633 18:25:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:26:58.633 18:25:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:26:58.633 18:25:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:26:58.633 18:25:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:26:58.633 18:25:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:26:58.634 18:25:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=62444 00:26:58.634 18:25:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 62444' 00:26:58.634 
18:25:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:26:58.634 Process raid pid: 62444 00:26:58.634 18:25:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 62444 00:26:58.634 18:25:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 62444 ']' 00:26:58.634 18:25:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:58.634 18:25:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:58.634 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:58.634 18:25:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:58.634 18:25:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:58.634 18:25:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:58.634 [2024-12-06 18:25:29.515008] Starting SPDK v25.01-pre git sha1 b6a18b192 / DPDK 24.03.0 initialization... 
00:26:58.634 [2024-12-06 18:25:29.515140] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:58.892 [2024-12-06 18:25:29.702884] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:58.892 [2024-12-06 18:25:29.827376] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:59.151 [2024-12-06 18:25:30.057214] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:26:59.151 [2024-12-06 18:25:30.057266] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:26:59.718 18:25:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:59.718 18:25:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:26:59.718 18:25:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:26:59.718 18:25:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:59.718 18:25:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:59.718 [2024-12-06 18:25:30.376725] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:26:59.718 [2024-12-06 18:25:30.376790] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:26:59.718 [2024-12-06 18:25:30.376803] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:26:59.718 [2024-12-06 18:25:30.376817] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:26:59.718 18:25:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:59.718 18:25:30 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:26:59.718 18:25:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:26:59.718 18:25:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:26:59.718 18:25:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:26:59.718 18:25:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:26:59.718 18:25:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:26:59.718 18:25:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:59.718 18:25:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:59.718 18:25:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:59.718 18:25:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:26:59.718 18:25:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:26:59.718 18:25:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:59.719 18:25:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:59.719 18:25:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:26:59.719 18:25:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:59.719 18:25:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:26:59.719 "name": "Existed_Raid", 00:26:59.719 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:59.719 "strip_size_kb": 0, 00:26:59.719 "state": "configuring", 00:26:59.719 
"raid_level": "raid1", 00:26:59.719 "superblock": false, 00:26:59.719 "num_base_bdevs": 2, 00:26:59.719 "num_base_bdevs_discovered": 0, 00:26:59.719 "num_base_bdevs_operational": 2, 00:26:59.719 "base_bdevs_list": [ 00:26:59.719 { 00:26:59.719 "name": "BaseBdev1", 00:26:59.719 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:59.719 "is_configured": false, 00:26:59.719 "data_offset": 0, 00:26:59.719 "data_size": 0 00:26:59.719 }, 00:26:59.719 { 00:26:59.719 "name": "BaseBdev2", 00:26:59.719 "uuid": "00000000-0000-0000-0000-000000000000", 00:26:59.719 "is_configured": false, 00:26:59.719 "data_offset": 0, 00:26:59.719 "data_size": 0 00:26:59.719 } 00:26:59.719 ] 00:26:59.719 }' 00:26:59.719 18:25:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:26:59.719 18:25:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:59.978 18:25:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:26:59.978 18:25:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:59.978 18:25:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:59.978 [2024-12-06 18:25:30.816087] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:26:59.978 [2024-12-06 18:25:30.816134] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:26:59.978 18:25:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:59.978 18:25:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:26:59.979 18:25:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:59.979 18:25:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:26:59.979 [2024-12-06 18:25:30.828057] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:26:59.979 [2024-12-06 18:25:30.828110] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:26:59.979 [2024-12-06 18:25:30.828134] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:26:59.979 [2024-12-06 18:25:30.828150] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:26:59.979 18:25:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:59.979 18:25:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:26:59.979 18:25:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:59.979 18:25:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:59.979 [2024-12-06 18:25:30.880367] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:26:59.979 BaseBdev1 00:26:59.979 18:25:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:59.979 18:25:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:26:59.979 18:25:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:26:59.979 18:25:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:26:59.979 18:25:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:26:59.979 18:25:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:26:59.979 18:25:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:26:59.979 18:25:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # 
rpc_cmd bdev_wait_for_examine 00:26:59.979 18:25:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:59.979 18:25:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:59.979 18:25:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:59.979 18:25:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:26:59.979 18:25:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:59.979 18:25:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:26:59.979 [ 00:26:59.979 { 00:26:59.979 "name": "BaseBdev1", 00:26:59.979 "aliases": [ 00:26:59.979 "0765d87c-9283-429d-a38c-ec0d41cf0f5d" 00:26:59.979 ], 00:26:59.979 "product_name": "Malloc disk", 00:26:59.979 "block_size": 512, 00:26:59.979 "num_blocks": 65536, 00:26:59.979 "uuid": "0765d87c-9283-429d-a38c-ec0d41cf0f5d", 00:26:59.979 "assigned_rate_limits": { 00:26:59.979 "rw_ios_per_sec": 0, 00:26:59.979 "rw_mbytes_per_sec": 0, 00:26:59.979 "r_mbytes_per_sec": 0, 00:26:59.979 "w_mbytes_per_sec": 0 00:26:59.979 }, 00:26:59.979 "claimed": true, 00:26:59.979 "claim_type": "exclusive_write", 00:26:59.979 "zoned": false, 00:26:59.979 "supported_io_types": { 00:26:59.979 "read": true, 00:26:59.979 "write": true, 00:26:59.979 "unmap": true, 00:26:59.979 "flush": true, 00:26:59.979 "reset": true, 00:26:59.979 "nvme_admin": false, 00:26:59.979 "nvme_io": false, 00:26:59.979 "nvme_io_md": false, 00:26:59.979 "write_zeroes": true, 00:26:59.979 "zcopy": true, 00:26:59.979 "get_zone_info": false, 00:26:59.979 "zone_management": false, 00:26:59.979 "zone_append": false, 00:26:59.979 "compare": false, 00:26:59.979 "compare_and_write": false, 00:26:59.979 "abort": true, 00:26:59.979 "seek_hole": false, 00:26:59.979 "seek_data": false, 00:26:59.979 "copy": true, 00:26:59.979 "nvme_iov_md": 
false 00:26:59.979 }, 00:26:59.979 "memory_domains": [ 00:26:59.979 { 00:26:59.979 "dma_device_id": "system", 00:26:59.979 "dma_device_type": 1 00:26:59.979 }, 00:26:59.979 { 00:26:59.979 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:26:59.979 "dma_device_type": 2 00:26:59.979 } 00:26:59.979 ], 00:26:59.979 "driver_specific": {} 00:26:59.979 } 00:26:59.979 ] 00:26:59.979 18:25:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:59.979 18:25:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:26:59.979 18:25:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:26:59.979 18:25:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:26:59.979 18:25:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:26:59.979 18:25:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:26:59.979 18:25:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:26:59.979 18:25:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:26:59.979 18:25:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:26:59.979 18:25:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:26:59.979 18:25:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:26:59.979 18:25:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:00.238 18:25:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:00.238 18:25:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:00.238 18:25:30 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:00.238 18:25:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:00.238 18:25:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:00.238 18:25:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:00.238 "name": "Existed_Raid", 00:27:00.238 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:00.238 "strip_size_kb": 0, 00:27:00.238 "state": "configuring", 00:27:00.238 "raid_level": "raid1", 00:27:00.238 "superblock": false, 00:27:00.238 "num_base_bdevs": 2, 00:27:00.238 "num_base_bdevs_discovered": 1, 00:27:00.238 "num_base_bdevs_operational": 2, 00:27:00.238 "base_bdevs_list": [ 00:27:00.238 { 00:27:00.238 "name": "BaseBdev1", 00:27:00.238 "uuid": "0765d87c-9283-429d-a38c-ec0d41cf0f5d", 00:27:00.238 "is_configured": true, 00:27:00.238 "data_offset": 0, 00:27:00.238 "data_size": 65536 00:27:00.238 }, 00:27:00.238 { 00:27:00.238 "name": "BaseBdev2", 00:27:00.238 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:00.238 "is_configured": false, 00:27:00.238 "data_offset": 0, 00:27:00.238 "data_size": 0 00:27:00.238 } 00:27:00.238 ] 00:27:00.238 }' 00:27:00.238 18:25:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:00.238 18:25:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:00.497 18:25:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:27:00.497 18:25:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:00.497 18:25:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:00.497 [2024-12-06 18:25:31.363816] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:27:00.497 [2024-12-06 18:25:31.363883] 
bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:27:00.497 18:25:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:00.497 18:25:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:27:00.497 18:25:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:00.497 18:25:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:00.497 [2024-12-06 18:25:31.375854] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:27:00.497 [2024-12-06 18:25:31.378133] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:27:00.497 [2024-12-06 18:25:31.378208] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:27:00.497 18:25:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:00.497 18:25:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:27:00.497 18:25:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:27:00.497 18:25:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:27:00.497 18:25:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:27:00.497 18:25:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:27:00.497 18:25:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:27:00.497 18:25:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:27:00.497 18:25:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=2 00:27:00.497 18:25:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:00.497 18:25:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:00.497 18:25:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:00.497 18:25:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:00.497 18:25:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:00.497 18:25:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:00.497 18:25:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:00.497 18:25:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:00.497 18:25:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:00.497 18:25:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:00.497 "name": "Existed_Raid", 00:27:00.497 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:00.497 "strip_size_kb": 0, 00:27:00.497 "state": "configuring", 00:27:00.497 "raid_level": "raid1", 00:27:00.497 "superblock": false, 00:27:00.497 "num_base_bdevs": 2, 00:27:00.497 "num_base_bdevs_discovered": 1, 00:27:00.498 "num_base_bdevs_operational": 2, 00:27:00.498 "base_bdevs_list": [ 00:27:00.498 { 00:27:00.498 "name": "BaseBdev1", 00:27:00.498 "uuid": "0765d87c-9283-429d-a38c-ec0d41cf0f5d", 00:27:00.498 "is_configured": true, 00:27:00.498 "data_offset": 0, 00:27:00.498 "data_size": 65536 00:27:00.498 }, 00:27:00.498 { 00:27:00.498 "name": "BaseBdev2", 00:27:00.498 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:00.498 "is_configured": false, 00:27:00.498 "data_offset": 0, 00:27:00.498 "data_size": 0 00:27:00.498 } 00:27:00.498 
] 00:27:00.498 }' 00:27:00.498 18:25:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:00.498 18:25:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:01.064 18:25:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:27:01.064 18:25:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:01.064 18:25:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:01.064 [2024-12-06 18:25:31.844422] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:27:01.064 [2024-12-06 18:25:31.844487] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:27:01.064 [2024-12-06 18:25:31.844497] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:27:01.064 [2024-12-06 18:25:31.844815] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:27:01.064 [2024-12-06 18:25:31.845008] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:27:01.064 [2024-12-06 18:25:31.845024] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:27:01.064 [2024-12-06 18:25:31.845336] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:01.064 BaseBdev2 00:27:01.064 18:25:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:01.064 18:25:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:27:01.064 18:25:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:27:01.064 18:25:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:27:01.064 18:25:31 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:27:01.064 18:25:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:27:01.064 18:25:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:27:01.064 18:25:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:27:01.064 18:25:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:01.064 18:25:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:01.064 18:25:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:01.064 18:25:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:27:01.064 18:25:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:01.064 18:25:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:01.064 [ 00:27:01.064 { 00:27:01.064 "name": "BaseBdev2", 00:27:01.064 "aliases": [ 00:27:01.064 "8205c290-da14-49c2-8cbe-1f6d30239475" 00:27:01.064 ], 00:27:01.064 "product_name": "Malloc disk", 00:27:01.064 "block_size": 512, 00:27:01.064 "num_blocks": 65536, 00:27:01.064 "uuid": "8205c290-da14-49c2-8cbe-1f6d30239475", 00:27:01.064 "assigned_rate_limits": { 00:27:01.064 "rw_ios_per_sec": 0, 00:27:01.064 "rw_mbytes_per_sec": 0, 00:27:01.064 "r_mbytes_per_sec": 0, 00:27:01.064 "w_mbytes_per_sec": 0 00:27:01.064 }, 00:27:01.064 "claimed": true, 00:27:01.064 "claim_type": "exclusive_write", 00:27:01.064 "zoned": false, 00:27:01.064 "supported_io_types": { 00:27:01.064 "read": true, 00:27:01.064 "write": true, 00:27:01.064 "unmap": true, 00:27:01.064 "flush": true, 00:27:01.064 "reset": true, 00:27:01.064 "nvme_admin": false, 00:27:01.064 "nvme_io": false, 00:27:01.064 "nvme_io_md": 
false, 00:27:01.064 "write_zeroes": true, 00:27:01.064 "zcopy": true, 00:27:01.064 "get_zone_info": false, 00:27:01.064 "zone_management": false, 00:27:01.064 "zone_append": false, 00:27:01.064 "compare": false, 00:27:01.064 "compare_and_write": false, 00:27:01.064 "abort": true, 00:27:01.064 "seek_hole": false, 00:27:01.064 "seek_data": false, 00:27:01.064 "copy": true, 00:27:01.064 "nvme_iov_md": false 00:27:01.064 }, 00:27:01.064 "memory_domains": [ 00:27:01.064 { 00:27:01.064 "dma_device_id": "system", 00:27:01.064 "dma_device_type": 1 00:27:01.064 }, 00:27:01.064 { 00:27:01.064 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:01.064 "dma_device_type": 2 00:27:01.064 } 00:27:01.064 ], 00:27:01.064 "driver_specific": {} 00:27:01.064 } 00:27:01.064 ] 00:27:01.064 18:25:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:01.064 18:25:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:27:01.064 18:25:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:27:01.064 18:25:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:27:01.064 18:25:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:27:01.064 18:25:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:27:01.064 18:25:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:27:01.064 18:25:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:27:01.064 18:25:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:27:01.064 18:25:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:27:01.064 18:25:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:27:01.064 18:25:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:01.065 18:25:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:01.065 18:25:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:01.065 18:25:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:01.065 18:25:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:01.065 18:25:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:01.065 18:25:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:01.065 18:25:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:01.065 18:25:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:01.065 "name": "Existed_Raid", 00:27:01.065 "uuid": "d6f184ab-6bba-4eec-a7fb-b5da45dc9987", 00:27:01.065 "strip_size_kb": 0, 00:27:01.065 "state": "online", 00:27:01.065 "raid_level": "raid1", 00:27:01.065 "superblock": false, 00:27:01.065 "num_base_bdevs": 2, 00:27:01.065 "num_base_bdevs_discovered": 2, 00:27:01.065 "num_base_bdevs_operational": 2, 00:27:01.065 "base_bdevs_list": [ 00:27:01.065 { 00:27:01.065 "name": "BaseBdev1", 00:27:01.065 "uuid": "0765d87c-9283-429d-a38c-ec0d41cf0f5d", 00:27:01.065 "is_configured": true, 00:27:01.065 "data_offset": 0, 00:27:01.065 "data_size": 65536 00:27:01.065 }, 00:27:01.065 { 00:27:01.065 "name": "BaseBdev2", 00:27:01.065 "uuid": "8205c290-da14-49c2-8cbe-1f6d30239475", 00:27:01.065 "is_configured": true, 00:27:01.065 "data_offset": 0, 00:27:01.065 "data_size": 65536 00:27:01.065 } 00:27:01.065 ] 00:27:01.065 }' 00:27:01.065 18:25:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:27:01.065 18:25:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:01.686 18:25:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:27:01.686 18:25:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:27:01.686 18:25:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:27:01.686 18:25:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:27:01.686 18:25:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:27:01.686 18:25:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:27:01.686 18:25:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:27:01.686 18:25:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:01.686 18:25:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:01.686 18:25:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:27:01.686 [2024-12-06 18:25:32.340098] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:27:01.686 18:25:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:01.686 18:25:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:27:01.686 "name": "Existed_Raid", 00:27:01.686 "aliases": [ 00:27:01.686 "d6f184ab-6bba-4eec-a7fb-b5da45dc9987" 00:27:01.686 ], 00:27:01.686 "product_name": "Raid Volume", 00:27:01.686 "block_size": 512, 00:27:01.686 "num_blocks": 65536, 00:27:01.686 "uuid": "d6f184ab-6bba-4eec-a7fb-b5da45dc9987", 00:27:01.686 "assigned_rate_limits": { 00:27:01.686 "rw_ios_per_sec": 0, 00:27:01.686 "rw_mbytes_per_sec": 0, 00:27:01.686 "r_mbytes_per_sec": 
0, 00:27:01.686 "w_mbytes_per_sec": 0 00:27:01.686 }, 00:27:01.686 "claimed": false, 00:27:01.686 "zoned": false, 00:27:01.686 "supported_io_types": { 00:27:01.686 "read": true, 00:27:01.686 "write": true, 00:27:01.686 "unmap": false, 00:27:01.686 "flush": false, 00:27:01.686 "reset": true, 00:27:01.686 "nvme_admin": false, 00:27:01.686 "nvme_io": false, 00:27:01.686 "nvme_io_md": false, 00:27:01.686 "write_zeroes": true, 00:27:01.686 "zcopy": false, 00:27:01.686 "get_zone_info": false, 00:27:01.686 "zone_management": false, 00:27:01.686 "zone_append": false, 00:27:01.686 "compare": false, 00:27:01.686 "compare_and_write": false, 00:27:01.686 "abort": false, 00:27:01.686 "seek_hole": false, 00:27:01.686 "seek_data": false, 00:27:01.686 "copy": false, 00:27:01.686 "nvme_iov_md": false 00:27:01.686 }, 00:27:01.686 "memory_domains": [ 00:27:01.686 { 00:27:01.686 "dma_device_id": "system", 00:27:01.686 "dma_device_type": 1 00:27:01.686 }, 00:27:01.686 { 00:27:01.686 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:01.686 "dma_device_type": 2 00:27:01.686 }, 00:27:01.686 { 00:27:01.686 "dma_device_id": "system", 00:27:01.686 "dma_device_type": 1 00:27:01.686 }, 00:27:01.686 { 00:27:01.686 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:01.686 "dma_device_type": 2 00:27:01.686 } 00:27:01.686 ], 00:27:01.686 "driver_specific": { 00:27:01.686 "raid": { 00:27:01.686 "uuid": "d6f184ab-6bba-4eec-a7fb-b5da45dc9987", 00:27:01.686 "strip_size_kb": 0, 00:27:01.686 "state": "online", 00:27:01.686 "raid_level": "raid1", 00:27:01.686 "superblock": false, 00:27:01.686 "num_base_bdevs": 2, 00:27:01.686 "num_base_bdevs_discovered": 2, 00:27:01.686 "num_base_bdevs_operational": 2, 00:27:01.686 "base_bdevs_list": [ 00:27:01.686 { 00:27:01.686 "name": "BaseBdev1", 00:27:01.686 "uuid": "0765d87c-9283-429d-a38c-ec0d41cf0f5d", 00:27:01.686 "is_configured": true, 00:27:01.686 "data_offset": 0, 00:27:01.686 "data_size": 65536 00:27:01.686 }, 00:27:01.686 { 00:27:01.686 "name": "BaseBdev2", 
00:27:01.686 "uuid": "8205c290-da14-49c2-8cbe-1f6d30239475", 00:27:01.686 "is_configured": true, 00:27:01.686 "data_offset": 0, 00:27:01.686 "data_size": 65536 00:27:01.686 } 00:27:01.686 ] 00:27:01.686 } 00:27:01.686 } 00:27:01.686 }' 00:27:01.687 18:25:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:27:01.687 18:25:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:27:01.687 BaseBdev2' 00:27:01.687 18:25:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:27:01.687 18:25:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:27:01.687 18:25:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:27:01.687 18:25:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:27:01.687 18:25:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:27:01.687 18:25:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:01.687 18:25:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:01.687 18:25:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:01.687 18:25:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:27:01.687 18:25:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:27:01.687 18:25:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:27:01.687 18:25:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 
00:27:01.687 18:25:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:01.687 18:25:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:01.687 18:25:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:27:01.687 18:25:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:01.687 18:25:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:27:01.687 18:25:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:27:01.687 18:25:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:27:01.687 18:25:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:01.687 18:25:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:01.687 [2024-12-06 18:25:32.579573] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:27:01.952 18:25:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:01.952 18:25:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:27:01.952 18:25:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:27:01.952 18:25:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:27:01.952 18:25:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:27:01.952 18:25:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:27:01.952 18:25:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:27:01.952 18:25:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # 
local raid_bdev_name=Existed_Raid 00:27:01.952 18:25:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:27:01.952 18:25:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:27:01.952 18:25:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:27:01.952 18:25:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:27:01.952 18:25:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:01.952 18:25:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:01.952 18:25:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:01.952 18:25:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:01.952 18:25:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:01.952 18:25:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:01.952 18:25:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:01.952 18:25:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:01.952 18:25:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:01.952 18:25:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:01.952 "name": "Existed_Raid", 00:27:01.952 "uuid": "d6f184ab-6bba-4eec-a7fb-b5da45dc9987", 00:27:01.952 "strip_size_kb": 0, 00:27:01.952 "state": "online", 00:27:01.952 "raid_level": "raid1", 00:27:01.952 "superblock": false, 00:27:01.952 "num_base_bdevs": 2, 00:27:01.952 "num_base_bdevs_discovered": 1, 00:27:01.952 "num_base_bdevs_operational": 1, 00:27:01.952 "base_bdevs_list": [ 00:27:01.952 
{ 00:27:01.952 "name": null, 00:27:01.952 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:01.952 "is_configured": false, 00:27:01.952 "data_offset": 0, 00:27:01.952 "data_size": 65536 00:27:01.952 }, 00:27:01.952 { 00:27:01.952 "name": "BaseBdev2", 00:27:01.952 "uuid": "8205c290-da14-49c2-8cbe-1f6d30239475", 00:27:01.952 "is_configured": true, 00:27:01.952 "data_offset": 0, 00:27:01.952 "data_size": 65536 00:27:01.952 } 00:27:01.952 ] 00:27:01.952 }' 00:27:01.952 18:25:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:01.952 18:25:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:02.211 18:25:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:27:02.211 18:25:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:27:02.211 18:25:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:27:02.211 18:25:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:02.211 18:25:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:02.211 18:25:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:02.211 18:25:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:02.211 18:25:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:27:02.211 18:25:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:27:02.211 18:25:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:27:02.211 18:25:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:02.211 18:25:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:27:02.211 [2024-12-06 18:25:33.158098] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:27:02.211 [2024-12-06 18:25:33.158212] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:27:02.471 [2024-12-06 18:25:33.253884] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:27:02.471 [2024-12-06 18:25:33.253945] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:27:02.471 [2024-12-06 18:25:33.253961] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:27:02.471 18:25:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:02.471 18:25:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:27:02.471 18:25:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:27:02.471 18:25:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:02.471 18:25:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:02.471 18:25:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:27:02.471 18:25:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:02.471 18:25:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:02.471 18:25:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:27:02.471 18:25:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:27:02.471 18:25:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:27:02.471 18:25:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 62444 00:27:02.471 18:25:33 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 62444 ']' 00:27:02.471 18:25:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 62444 00:27:02.471 18:25:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:27:02.471 18:25:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:02.471 18:25:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62444 00:27:02.471 18:25:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:27:02.471 killing process with pid 62444 00:27:02.471 18:25:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:27:02.471 18:25:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62444' 00:27:02.471 18:25:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 62444 00:27:02.471 [2024-12-06 18:25:33.346692] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:27:02.471 18:25:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 62444 00:27:02.471 [2024-12-06 18:25:33.363467] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:27:03.846 18:25:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:27:03.846 00:27:03.846 real 0m5.103s 00:27:03.846 user 0m7.309s 00:27:03.846 sys 0m0.964s 00:27:03.846 18:25:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:03.846 18:25:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:03.846 ************************************ 00:27:03.846 END TEST raid_state_function_test 00:27:03.846 ************************************ 00:27:03.846 18:25:34 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test 
raid_state_function_test_sb raid_state_function_test raid1 2 true 00:27:03.846 18:25:34 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:27:03.846 18:25:34 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:03.846 18:25:34 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:27:03.846 ************************************ 00:27:03.846 START TEST raid_state_function_test_sb 00:27:03.846 ************************************ 00:27:03.846 18:25:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 2 true 00:27:03.846 18:25:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:27:03.846 18:25:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:27:03.846 18:25:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:27:03.846 18:25:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:27:03.846 18:25:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:27:03.846 18:25:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:27:03.847 18:25:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:27:03.847 18:25:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:27:03.847 18:25:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:27:03.847 18:25:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:27:03.847 18:25:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:27:03.847 18:25:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:27:03.847 18:25:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # 
base_bdevs=('BaseBdev1' 'BaseBdev2') 00:27:03.847 18:25:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:27:03.847 18:25:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:27:03.847 18:25:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:27:03.847 18:25:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:27:03.847 18:25:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:27:03.847 18:25:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:27:03.847 18:25:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:27:03.847 18:25:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:27:03.847 18:25:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:27:03.847 18:25:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=62697 00:27:03.847 18:25:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 62697' 00:27:03.847 Process raid pid: 62697 00:27:03.847 18:25:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 62697 00:27:03.847 18:25:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:27:03.847 18:25:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 62697 ']' 00:27:03.847 18:25:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:03.847 18:25:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:03.847 Waiting for process to start 
up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:03.847 18:25:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:03.847 18:25:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:03.847 18:25:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:03.847 [2024-12-06 18:25:34.683924] Starting SPDK v25.01-pre git sha1 b6a18b192 / DPDK 24.03.0 initialization... 00:27:03.847 [2024-12-06 18:25:34.684057] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:04.105 [2024-12-06 18:25:34.872957] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:04.105 [2024-12-06 18:25:34.992569] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:04.363 [2024-12-06 18:25:35.188908] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:27:04.363 [2024-12-06 18:25:35.188956] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:27:04.621 18:25:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:04.621 18:25:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:27:04.621 18:25:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:27:04.621 18:25:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:04.621 18:25:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:04.621 [2024-12-06 18:25:35.524649] 
bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:27:04.621 [2024-12-06 18:25:35.524710] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:27:04.621 [2024-12-06 18:25:35.524722] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:27:04.621 [2024-12-06 18:25:35.524735] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:27:04.621 18:25:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:04.621 18:25:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:27:04.621 18:25:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:27:04.621 18:25:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:27:04.621 18:25:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:27:04.621 18:25:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:27:04.621 18:25:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:27:04.621 18:25:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:04.621 18:25:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:04.621 18:25:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:04.621 18:25:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:04.621 18:25:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:04.621 18:25:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 
-- # rpc_cmd bdev_raid_get_bdevs all 00:27:04.621 18:25:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:04.621 18:25:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:04.621 18:25:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:04.621 18:25:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:04.621 "name": "Existed_Raid", 00:27:04.621 "uuid": "2ec4500f-39e6-4998-a678-de2dd1c44081", 00:27:04.621 "strip_size_kb": 0, 00:27:04.621 "state": "configuring", 00:27:04.621 "raid_level": "raid1", 00:27:04.621 "superblock": true, 00:27:04.621 "num_base_bdevs": 2, 00:27:04.621 "num_base_bdevs_discovered": 0, 00:27:04.621 "num_base_bdevs_operational": 2, 00:27:04.621 "base_bdevs_list": [ 00:27:04.621 { 00:27:04.621 "name": "BaseBdev1", 00:27:04.621 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:04.621 "is_configured": false, 00:27:04.621 "data_offset": 0, 00:27:04.621 "data_size": 0 00:27:04.621 }, 00:27:04.621 { 00:27:04.621 "name": "BaseBdev2", 00:27:04.621 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:04.621 "is_configured": false, 00:27:04.621 "data_offset": 0, 00:27:04.621 "data_size": 0 00:27:04.621 } 00:27:04.621 ] 00:27:04.621 }' 00:27:04.621 18:25:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:04.621 18:25:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:05.186 18:25:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:27:05.186 18:25:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:05.186 18:25:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:05.186 [2024-12-06 18:25:35.920248] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid 
bdev: Existed_Raid 00:27:05.186 [2024-12-06 18:25:35.920291] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:27:05.186 18:25:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:05.186 18:25:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:27:05.186 18:25:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:05.186 18:25:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:05.186 [2024-12-06 18:25:35.932229] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:27:05.186 [2024-12-06 18:25:35.932274] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:27:05.186 [2024-12-06 18:25:35.932285] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:27:05.186 [2024-12-06 18:25:35.932301] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:27:05.186 18:25:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:05.186 18:25:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:27:05.186 18:25:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:05.186 18:25:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:05.186 [2024-12-06 18:25:35.981993] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:27:05.186 BaseBdev1 00:27:05.186 18:25:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:05.186 18:25:35 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:27:05.186 18:25:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:27:05.186 18:25:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:27:05.186 18:25:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:27:05.186 18:25:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:27:05.186 18:25:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:27:05.186 18:25:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:27:05.186 18:25:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:05.186 18:25:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:05.186 18:25:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:05.186 18:25:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:27:05.186 18:25:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:05.186 18:25:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:05.186 [ 00:27:05.186 { 00:27:05.186 "name": "BaseBdev1", 00:27:05.186 "aliases": [ 00:27:05.186 "4e66a2bd-196f-4fb5-a85f-768b8a4cc6bc" 00:27:05.186 ], 00:27:05.186 "product_name": "Malloc disk", 00:27:05.186 "block_size": 512, 00:27:05.186 "num_blocks": 65536, 00:27:05.186 "uuid": "4e66a2bd-196f-4fb5-a85f-768b8a4cc6bc", 00:27:05.186 "assigned_rate_limits": { 00:27:05.186 "rw_ios_per_sec": 0, 00:27:05.186 "rw_mbytes_per_sec": 0, 00:27:05.186 "r_mbytes_per_sec": 0, 00:27:05.186 "w_mbytes_per_sec": 0 00:27:05.186 }, 00:27:05.186 "claimed": true, 
00:27:05.186 "claim_type": "exclusive_write", 00:27:05.186 "zoned": false, 00:27:05.186 "supported_io_types": { 00:27:05.186 "read": true, 00:27:05.186 "write": true, 00:27:05.186 "unmap": true, 00:27:05.186 "flush": true, 00:27:05.186 "reset": true, 00:27:05.186 "nvme_admin": false, 00:27:05.186 "nvme_io": false, 00:27:05.186 "nvme_io_md": false, 00:27:05.186 "write_zeroes": true, 00:27:05.186 "zcopy": true, 00:27:05.186 "get_zone_info": false, 00:27:05.186 "zone_management": false, 00:27:05.186 "zone_append": false, 00:27:05.186 "compare": false, 00:27:05.186 "compare_and_write": false, 00:27:05.186 "abort": true, 00:27:05.186 "seek_hole": false, 00:27:05.186 "seek_data": false, 00:27:05.186 "copy": true, 00:27:05.186 "nvme_iov_md": false 00:27:05.186 }, 00:27:05.186 "memory_domains": [ 00:27:05.186 { 00:27:05.186 "dma_device_id": "system", 00:27:05.186 "dma_device_type": 1 00:27:05.186 }, 00:27:05.186 { 00:27:05.186 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:05.186 "dma_device_type": 2 00:27:05.186 } 00:27:05.186 ], 00:27:05.186 "driver_specific": {} 00:27:05.186 } 00:27:05.186 ] 00:27:05.186 18:25:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:05.186 18:25:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:27:05.186 18:25:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:27:05.186 18:25:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:27:05.186 18:25:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:27:05.186 18:25:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:27:05.186 18:25:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:27:05.186 18:25:36 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:27:05.187 18:25:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:05.187 18:25:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:05.187 18:25:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:05.187 18:25:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:05.187 18:25:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:05.187 18:25:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:05.187 18:25:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:05.187 18:25:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:05.187 18:25:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:05.187 18:25:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:05.187 "name": "Existed_Raid", 00:27:05.187 "uuid": "43d77438-e848-4da2-a1bc-b73cedfa2999", 00:27:05.187 "strip_size_kb": 0, 00:27:05.187 "state": "configuring", 00:27:05.187 "raid_level": "raid1", 00:27:05.187 "superblock": true, 00:27:05.187 "num_base_bdevs": 2, 00:27:05.187 "num_base_bdevs_discovered": 1, 00:27:05.187 "num_base_bdevs_operational": 2, 00:27:05.187 "base_bdevs_list": [ 00:27:05.187 { 00:27:05.187 "name": "BaseBdev1", 00:27:05.187 "uuid": "4e66a2bd-196f-4fb5-a85f-768b8a4cc6bc", 00:27:05.187 "is_configured": true, 00:27:05.187 "data_offset": 2048, 00:27:05.187 "data_size": 63488 00:27:05.187 }, 00:27:05.187 { 00:27:05.187 "name": "BaseBdev2", 00:27:05.187 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:05.187 "is_configured": false, 00:27:05.187 
"data_offset": 0, 00:27:05.187 "data_size": 0 00:27:05.187 } 00:27:05.187 ] 00:27:05.187 }' 00:27:05.187 18:25:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:05.187 18:25:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:05.753 18:25:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:27:05.753 18:25:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:05.753 18:25:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:05.753 [2024-12-06 18:25:36.449609] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:27:05.753 [2024-12-06 18:25:36.449664] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:27:05.753 18:25:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:05.753 18:25:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:27:05.753 18:25:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:05.753 18:25:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:05.753 [2024-12-06 18:25:36.457657] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:27:05.753 [2024-12-06 18:25:36.459851] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:27:05.753 [2024-12-06 18:25:36.460005] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:27:05.753 18:25:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:05.753 18:25:36 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:27:05.753 18:25:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:27:05.753 18:25:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:27:05.753 18:25:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:27:05.753 18:25:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:27:05.753 18:25:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:27:05.753 18:25:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:27:05.753 18:25:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:27:05.753 18:25:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:05.753 18:25:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:05.753 18:25:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:05.753 18:25:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:05.753 18:25:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:05.753 18:25:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:05.753 18:25:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:05.753 18:25:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:05.753 18:25:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:05.753 18:25:36 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:05.753 "name": "Existed_Raid", 00:27:05.753 "uuid": "bda45fc4-b8c3-4826-8ce9-b1aa1f208590", 00:27:05.753 "strip_size_kb": 0, 00:27:05.753 "state": "configuring", 00:27:05.753 "raid_level": "raid1", 00:27:05.753 "superblock": true, 00:27:05.753 "num_base_bdevs": 2, 00:27:05.753 "num_base_bdevs_discovered": 1, 00:27:05.753 "num_base_bdevs_operational": 2, 00:27:05.753 "base_bdevs_list": [ 00:27:05.753 { 00:27:05.753 "name": "BaseBdev1", 00:27:05.753 "uuid": "4e66a2bd-196f-4fb5-a85f-768b8a4cc6bc", 00:27:05.753 "is_configured": true, 00:27:05.753 "data_offset": 2048, 00:27:05.753 "data_size": 63488 00:27:05.753 }, 00:27:05.753 { 00:27:05.753 "name": "BaseBdev2", 00:27:05.753 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:05.753 "is_configured": false, 00:27:05.753 "data_offset": 0, 00:27:05.753 "data_size": 0 00:27:05.753 } 00:27:05.753 ] 00:27:05.753 }' 00:27:05.753 18:25:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:05.753 18:25:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:06.011 18:25:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:27:06.011 18:25:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:06.011 18:25:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:06.011 [2024-12-06 18:25:36.928226] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:27:06.011 [2024-12-06 18:25:36.928512] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:27:06.011 [2024-12-06 18:25:36.928535] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:27:06.011 BaseBdev2 00:27:06.011 [2024-12-06 18:25:36.928805] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 
0x60d000005d40 00:27:06.011 [2024-12-06 18:25:36.928962] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:27:06.011 [2024-12-06 18:25:36.928979] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:27:06.011 [2024-12-06 18:25:36.929122] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:06.011 18:25:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:06.011 18:25:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:27:06.011 18:25:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:27:06.011 18:25:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:27:06.011 18:25:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:27:06.011 18:25:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:27:06.011 18:25:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:27:06.011 18:25:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:27:06.011 18:25:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:06.011 18:25:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:06.011 18:25:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:06.011 18:25:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:27:06.011 18:25:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:06.011 18:25:36 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:27:06.011 [ 00:27:06.011 { 00:27:06.011 "name": "BaseBdev2", 00:27:06.011 "aliases": [ 00:27:06.011 "f4c694ca-9c87-4380-8ffa-4fa01e186f4e" 00:27:06.011 ], 00:27:06.011 "product_name": "Malloc disk", 00:27:06.011 "block_size": 512, 00:27:06.011 "num_blocks": 65536, 00:27:06.011 "uuid": "f4c694ca-9c87-4380-8ffa-4fa01e186f4e", 00:27:06.011 "assigned_rate_limits": { 00:27:06.011 "rw_ios_per_sec": 0, 00:27:06.270 "rw_mbytes_per_sec": 0, 00:27:06.270 "r_mbytes_per_sec": 0, 00:27:06.270 "w_mbytes_per_sec": 0 00:27:06.270 }, 00:27:06.270 "claimed": true, 00:27:06.270 "claim_type": "exclusive_write", 00:27:06.270 "zoned": false, 00:27:06.270 "supported_io_types": { 00:27:06.270 "read": true, 00:27:06.270 "write": true, 00:27:06.270 "unmap": true, 00:27:06.270 "flush": true, 00:27:06.270 "reset": true, 00:27:06.270 "nvme_admin": false, 00:27:06.270 "nvme_io": false, 00:27:06.270 "nvme_io_md": false, 00:27:06.270 "write_zeroes": true, 00:27:06.270 "zcopy": true, 00:27:06.270 "get_zone_info": false, 00:27:06.270 "zone_management": false, 00:27:06.270 "zone_append": false, 00:27:06.270 "compare": false, 00:27:06.270 "compare_and_write": false, 00:27:06.270 "abort": true, 00:27:06.270 "seek_hole": false, 00:27:06.270 "seek_data": false, 00:27:06.270 "copy": true, 00:27:06.270 "nvme_iov_md": false 00:27:06.270 }, 00:27:06.270 "memory_domains": [ 00:27:06.270 { 00:27:06.270 "dma_device_id": "system", 00:27:06.270 "dma_device_type": 1 00:27:06.270 }, 00:27:06.270 { 00:27:06.270 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:06.270 "dma_device_type": 2 00:27:06.270 } 00:27:06.270 ], 00:27:06.270 "driver_specific": {} 00:27:06.270 } 00:27:06.270 ] 00:27:06.270 18:25:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:06.270 18:25:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:27:06.270 18:25:36 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@250 -- # (( i++ )) 00:27:06.270 18:25:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:27:06.270 18:25:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:27:06.270 18:25:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:27:06.270 18:25:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:27:06.270 18:25:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:27:06.270 18:25:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:27:06.270 18:25:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:27:06.270 18:25:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:06.270 18:25:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:06.270 18:25:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:06.270 18:25:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:06.270 18:25:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:06.270 18:25:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:06.270 18:25:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:06.270 18:25:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:06.270 18:25:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:06.270 18:25:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 
-- # raid_bdev_info='{ 00:27:06.270 "name": "Existed_Raid", 00:27:06.270 "uuid": "bda45fc4-b8c3-4826-8ce9-b1aa1f208590", 00:27:06.270 "strip_size_kb": 0, 00:27:06.270 "state": "online", 00:27:06.270 "raid_level": "raid1", 00:27:06.270 "superblock": true, 00:27:06.270 "num_base_bdevs": 2, 00:27:06.270 "num_base_bdevs_discovered": 2, 00:27:06.270 "num_base_bdevs_operational": 2, 00:27:06.270 "base_bdevs_list": [ 00:27:06.270 { 00:27:06.270 "name": "BaseBdev1", 00:27:06.270 "uuid": "4e66a2bd-196f-4fb5-a85f-768b8a4cc6bc", 00:27:06.270 "is_configured": true, 00:27:06.270 "data_offset": 2048, 00:27:06.270 "data_size": 63488 00:27:06.270 }, 00:27:06.270 { 00:27:06.270 "name": "BaseBdev2", 00:27:06.270 "uuid": "f4c694ca-9c87-4380-8ffa-4fa01e186f4e", 00:27:06.270 "is_configured": true, 00:27:06.270 "data_offset": 2048, 00:27:06.270 "data_size": 63488 00:27:06.270 } 00:27:06.270 ] 00:27:06.270 }' 00:27:06.270 18:25:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:06.270 18:25:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:06.528 18:25:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:27:06.529 18:25:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:27:06.529 18:25:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:27:06.529 18:25:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:27:06.529 18:25:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:27:06.529 18:25:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:27:06.529 18:25:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:27:06.529 18:25:37 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:06.529 18:25:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:06.529 18:25:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:27:06.529 [2024-12-06 18:25:37.419891] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:27:06.529 18:25:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:06.529 18:25:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:27:06.529 "name": "Existed_Raid", 00:27:06.529 "aliases": [ 00:27:06.529 "bda45fc4-b8c3-4826-8ce9-b1aa1f208590" 00:27:06.529 ], 00:27:06.529 "product_name": "Raid Volume", 00:27:06.529 "block_size": 512, 00:27:06.529 "num_blocks": 63488, 00:27:06.529 "uuid": "bda45fc4-b8c3-4826-8ce9-b1aa1f208590", 00:27:06.529 "assigned_rate_limits": { 00:27:06.529 "rw_ios_per_sec": 0, 00:27:06.529 "rw_mbytes_per_sec": 0, 00:27:06.529 "r_mbytes_per_sec": 0, 00:27:06.529 "w_mbytes_per_sec": 0 00:27:06.529 }, 00:27:06.529 "claimed": false, 00:27:06.529 "zoned": false, 00:27:06.529 "supported_io_types": { 00:27:06.529 "read": true, 00:27:06.529 "write": true, 00:27:06.529 "unmap": false, 00:27:06.529 "flush": false, 00:27:06.529 "reset": true, 00:27:06.529 "nvme_admin": false, 00:27:06.529 "nvme_io": false, 00:27:06.529 "nvme_io_md": false, 00:27:06.529 "write_zeroes": true, 00:27:06.529 "zcopy": false, 00:27:06.529 "get_zone_info": false, 00:27:06.529 "zone_management": false, 00:27:06.529 "zone_append": false, 00:27:06.529 "compare": false, 00:27:06.529 "compare_and_write": false, 00:27:06.529 "abort": false, 00:27:06.529 "seek_hole": false, 00:27:06.529 "seek_data": false, 00:27:06.529 "copy": false, 00:27:06.529 "nvme_iov_md": false 00:27:06.529 }, 00:27:06.529 "memory_domains": [ 00:27:06.529 { 00:27:06.529 "dma_device_id": "system", 00:27:06.529 
"dma_device_type": 1 00:27:06.529 }, 00:27:06.529 { 00:27:06.529 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:06.529 "dma_device_type": 2 00:27:06.529 }, 00:27:06.529 { 00:27:06.529 "dma_device_id": "system", 00:27:06.529 "dma_device_type": 1 00:27:06.529 }, 00:27:06.529 { 00:27:06.529 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:06.529 "dma_device_type": 2 00:27:06.529 } 00:27:06.529 ], 00:27:06.529 "driver_specific": { 00:27:06.529 "raid": { 00:27:06.529 "uuid": "bda45fc4-b8c3-4826-8ce9-b1aa1f208590", 00:27:06.529 "strip_size_kb": 0, 00:27:06.529 "state": "online", 00:27:06.529 "raid_level": "raid1", 00:27:06.529 "superblock": true, 00:27:06.529 "num_base_bdevs": 2, 00:27:06.529 "num_base_bdevs_discovered": 2, 00:27:06.529 "num_base_bdevs_operational": 2, 00:27:06.529 "base_bdevs_list": [ 00:27:06.529 { 00:27:06.529 "name": "BaseBdev1", 00:27:06.529 "uuid": "4e66a2bd-196f-4fb5-a85f-768b8a4cc6bc", 00:27:06.529 "is_configured": true, 00:27:06.529 "data_offset": 2048, 00:27:06.529 "data_size": 63488 00:27:06.529 }, 00:27:06.529 { 00:27:06.529 "name": "BaseBdev2", 00:27:06.529 "uuid": "f4c694ca-9c87-4380-8ffa-4fa01e186f4e", 00:27:06.529 "is_configured": true, 00:27:06.529 "data_offset": 2048, 00:27:06.529 "data_size": 63488 00:27:06.529 } 00:27:06.529 ] 00:27:06.529 } 00:27:06.529 } 00:27:06.529 }' 00:27:06.529 18:25:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:27:06.823 18:25:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:27:06.823 BaseBdev2' 00:27:06.823 18:25:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:27:06.823 18:25:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:27:06.823 18:25:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 
-- # for name in $base_bdev_names 00:27:06.823 18:25:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:27:06.823 18:25:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:27:06.823 18:25:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:06.823 18:25:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:06.823 18:25:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:06.823 18:25:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:27:06.823 18:25:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:27:06.823 18:25:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:27:06.823 18:25:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:27:06.823 18:25:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:06.823 18:25:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:06.823 18:25:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:27:06.823 18:25:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:06.823 18:25:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:27:06.823 18:25:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:27:06.823 18:25:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:27:06.823 18:25:37 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:06.823 18:25:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:06.823 [2024-12-06 18:25:37.651343] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:27:06.823 18:25:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:06.823 18:25:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:27:06.823 18:25:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:27:06.823 18:25:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:27:06.823 18:25:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:27:06.823 18:25:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:27:06.823 18:25:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:27:06.823 18:25:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:27:06.823 18:25:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:27:06.823 18:25:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:27:06.823 18:25:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:27:06.823 18:25:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:27:06.823 18:25:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:06.823 18:25:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:06.823 18:25:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:27:06.823 18:25:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:06.823 18:25:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:06.823 18:25:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:06.823 18:25:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:06.823 18:25:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:07.081 18:25:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:07.081 18:25:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:07.081 "name": "Existed_Raid", 00:27:07.081 "uuid": "bda45fc4-b8c3-4826-8ce9-b1aa1f208590", 00:27:07.081 "strip_size_kb": 0, 00:27:07.081 "state": "online", 00:27:07.081 "raid_level": "raid1", 00:27:07.081 "superblock": true, 00:27:07.081 "num_base_bdevs": 2, 00:27:07.081 "num_base_bdevs_discovered": 1, 00:27:07.081 "num_base_bdevs_operational": 1, 00:27:07.081 "base_bdevs_list": [ 00:27:07.081 { 00:27:07.081 "name": null, 00:27:07.081 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:07.081 "is_configured": false, 00:27:07.081 "data_offset": 0, 00:27:07.081 "data_size": 63488 00:27:07.081 }, 00:27:07.081 { 00:27:07.081 "name": "BaseBdev2", 00:27:07.081 "uuid": "f4c694ca-9c87-4380-8ffa-4fa01e186f4e", 00:27:07.081 "is_configured": true, 00:27:07.081 "data_offset": 2048, 00:27:07.081 "data_size": 63488 00:27:07.081 } 00:27:07.081 ] 00:27:07.081 }' 00:27:07.081 18:25:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:07.081 18:25:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:07.340 18:25:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 
00:27:07.340 18:25:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:27:07.340 18:25:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:07.340 18:25:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:07.340 18:25:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:27:07.340 18:25:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:07.340 18:25:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:07.340 18:25:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:27:07.340 18:25:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:27:07.340 18:25:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:27:07.340 18:25:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:07.340 18:25:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:07.340 [2024-12-06 18:25:38.265663] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:27:07.340 [2024-12-06 18:25:38.265919] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:27:07.599 [2024-12-06 18:25:38.364413] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:27:07.599 [2024-12-06 18:25:38.364696] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:27:07.599 [2024-12-06 18:25:38.364725] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:27:07.599 18:25:38 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:07.599 18:25:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:27:07.599 18:25:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:27:07.599 18:25:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:07.599 18:25:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:27:07.599 18:25:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:07.599 18:25:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:07.599 18:25:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:07.599 18:25:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:27:07.599 18:25:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:27:07.599 18:25:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:27:07.599 18:25:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 62697 00:27:07.599 18:25:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 62697 ']' 00:27:07.599 18:25:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 62697 00:27:07.599 18:25:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:27:07.599 18:25:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:07.599 18:25:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62697 00:27:07.599 18:25:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:27:07.599 killing process with pid 62697 
00:27:07.599 18:25:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:27:07.599 18:25:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62697' 00:27:07.599 18:25:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 62697 00:27:07.599 [2024-12-06 18:25:38.460667] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:27:07.599 18:25:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 62697 00:27:07.599 [2024-12-06 18:25:38.477704] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:27:08.974 18:25:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:27:08.974 ************************************ 00:27:08.974 00:27:08.974 real 0m5.061s 00:27:08.974 user 0m7.229s 00:27:08.974 sys 0m0.924s 00:27:08.974 18:25:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:08.974 18:25:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:08.974 END TEST raid_state_function_test_sb 00:27:08.974 ************************************ 00:27:08.974 18:25:39 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid1 2 00:27:08.974 18:25:39 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:27:08.974 18:25:39 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:08.974 18:25:39 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:27:08.974 ************************************ 00:27:08.974 START TEST raid_superblock_test 00:27:08.974 ************************************ 00:27:08.974 18:25:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 2 00:27:08.974 18:25:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 
00:27:08.974 18:25:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:27:08.974 18:25:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:27:08.974 18:25:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:27:08.974 18:25:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:27:08.974 18:25:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:27:08.974 18:25:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:27:08.974 18:25:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:27:08.974 18:25:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:27:08.974 18:25:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:27:08.974 18:25:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:27:08.974 18:25:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:27:08.974 18:25:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:27:08.974 18:25:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:27:08.974 18:25:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:27:08.974 18:25:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=62945 00:27:08.974 18:25:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:27:08.974 18:25:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 62945 00:27:08.974 18:25:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 62945 ']' 00:27:08.974 18:25:39 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:08.974 18:25:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:08.974 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:08.974 18:25:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:08.974 18:25:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:08.974 18:25:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:08.974 [2024-12-06 18:25:39.814032] Starting SPDK v25.01-pre git sha1 b6a18b192 / DPDK 24.03.0 initialization... 00:27:08.974 [2024-12-06 18:25:39.814188] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62945 ] 00:27:09.233 [2024-12-06 18:25:39.999631] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:09.233 [2024-12-06 18:25:40.119223] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:09.491 [2024-12-06 18:25:40.326927] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:27:09.491 [2024-12-06 18:25:40.326992] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:27:09.750 18:25:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:09.750 18:25:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:27:09.750 18:25:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:27:09.750 18:25:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:27:09.750 18:25:40 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:27:09.750 18:25:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:27:09.750 18:25:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:27:09.750 18:25:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:27:09.750 18:25:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:27:09.750 18:25:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:27:09.750 18:25:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:27:09.750 18:25:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:09.750 18:25:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:10.009 malloc1 00:27:10.009 18:25:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:10.009 18:25:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:27:10.009 18:25:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:10.009 18:25:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:10.009 [2024-12-06 18:25:40.739693] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:27:10.009 [2024-12-06 18:25:40.739759] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:10.009 [2024-12-06 18:25:40.739784] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:27:10.009 [2024-12-06 18:25:40.739797] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:10.009 
[2024-12-06 18:25:40.742465] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:10.009 [2024-12-06 18:25:40.742671] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:27:10.009 pt1 00:27:10.009 18:25:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:10.009 18:25:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:27:10.009 18:25:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:27:10.009 18:25:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:27:10.009 18:25:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:27:10.009 18:25:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:27:10.009 18:25:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:27:10.009 18:25:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:27:10.009 18:25:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:27:10.009 18:25:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:27:10.009 18:25:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:10.009 18:25:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:10.009 malloc2 00:27:10.009 18:25:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:10.009 18:25:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:27:10.009 18:25:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:10.009 18:25:40 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:10.009 [2024-12-06 18:25:40.797930] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:27:10.009 [2024-12-06 18:25:40.798000] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:10.009 [2024-12-06 18:25:40.798032] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:27:10.009 [2024-12-06 18:25:40.798044] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:10.009 [2024-12-06 18:25:40.800677] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:10.009 [2024-12-06 18:25:40.800721] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:27:10.009 pt2 00:27:10.009 18:25:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:10.009 18:25:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:27:10.009 18:25:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:27:10.009 18:25:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:27:10.009 18:25:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:10.009 18:25:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:10.009 [2024-12-06 18:25:40.809991] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:27:10.009 [2024-12-06 18:25:40.812210] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:27:10.009 [2024-12-06 18:25:40.812385] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:27:10.009 [2024-12-06 18:25:40.812405] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:27:10.009 [2024-12-06 
18:25:40.812714] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:27:10.009 [2024-12-06 18:25:40.812901] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:27:10.009 [2024-12-06 18:25:40.812919] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:27:10.009 [2024-12-06 18:25:40.813115] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:10.009 18:25:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:10.009 18:25:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:27:10.010 18:25:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:27:10.010 18:25:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:27:10.010 18:25:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:27:10.010 18:25:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:27:10.010 18:25:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:27:10.010 18:25:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:10.010 18:25:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:10.010 18:25:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:10.010 18:25:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:10.010 18:25:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:10.010 18:25:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:10.010 18:25:40 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:10.010 18:25:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:10.010 18:25:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:10.010 18:25:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:10.010 "name": "raid_bdev1", 00:27:10.010 "uuid": "e4f75b50-66b3-4b71-bf13-84163eabc663", 00:27:10.010 "strip_size_kb": 0, 00:27:10.010 "state": "online", 00:27:10.010 "raid_level": "raid1", 00:27:10.010 "superblock": true, 00:27:10.010 "num_base_bdevs": 2, 00:27:10.010 "num_base_bdevs_discovered": 2, 00:27:10.010 "num_base_bdevs_operational": 2, 00:27:10.010 "base_bdevs_list": [ 00:27:10.010 { 00:27:10.010 "name": "pt1", 00:27:10.010 "uuid": "00000000-0000-0000-0000-000000000001", 00:27:10.010 "is_configured": true, 00:27:10.010 "data_offset": 2048, 00:27:10.010 "data_size": 63488 00:27:10.010 }, 00:27:10.010 { 00:27:10.010 "name": "pt2", 00:27:10.010 "uuid": "00000000-0000-0000-0000-000000000002", 00:27:10.010 "is_configured": true, 00:27:10.010 "data_offset": 2048, 00:27:10.010 "data_size": 63488 00:27:10.010 } 00:27:10.010 ] 00:27:10.010 }' 00:27:10.010 18:25:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:10.010 18:25:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:10.577 18:25:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:27:10.577 18:25:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:27:10.577 18:25:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:27:10.577 18:25:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:27:10.577 18:25:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:27:10.577 
18:25:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:27:10.577 18:25:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:27:10.577 18:25:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:10.577 18:25:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:27:10.577 18:25:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:10.577 [2024-12-06 18:25:41.293958] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:27:10.577 18:25:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:10.577 18:25:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:27:10.577 "name": "raid_bdev1", 00:27:10.577 "aliases": [ 00:27:10.577 "e4f75b50-66b3-4b71-bf13-84163eabc663" 00:27:10.577 ], 00:27:10.577 "product_name": "Raid Volume", 00:27:10.577 "block_size": 512, 00:27:10.577 "num_blocks": 63488, 00:27:10.577 "uuid": "e4f75b50-66b3-4b71-bf13-84163eabc663", 00:27:10.577 "assigned_rate_limits": { 00:27:10.577 "rw_ios_per_sec": 0, 00:27:10.577 "rw_mbytes_per_sec": 0, 00:27:10.577 "r_mbytes_per_sec": 0, 00:27:10.577 "w_mbytes_per_sec": 0 00:27:10.577 }, 00:27:10.577 "claimed": false, 00:27:10.577 "zoned": false, 00:27:10.577 "supported_io_types": { 00:27:10.577 "read": true, 00:27:10.577 "write": true, 00:27:10.577 "unmap": false, 00:27:10.577 "flush": false, 00:27:10.577 "reset": true, 00:27:10.577 "nvme_admin": false, 00:27:10.577 "nvme_io": false, 00:27:10.577 "nvme_io_md": false, 00:27:10.577 "write_zeroes": true, 00:27:10.577 "zcopy": false, 00:27:10.577 "get_zone_info": false, 00:27:10.577 "zone_management": false, 00:27:10.577 "zone_append": false, 00:27:10.577 "compare": false, 00:27:10.577 "compare_and_write": false, 00:27:10.577 "abort": false, 00:27:10.577 "seek_hole": false, 
00:27:10.577 "seek_data": false, 00:27:10.577 "copy": false, 00:27:10.577 "nvme_iov_md": false 00:27:10.577 }, 00:27:10.577 "memory_domains": [ 00:27:10.577 { 00:27:10.577 "dma_device_id": "system", 00:27:10.577 "dma_device_type": 1 00:27:10.577 }, 00:27:10.577 { 00:27:10.577 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:10.577 "dma_device_type": 2 00:27:10.577 }, 00:27:10.577 { 00:27:10.577 "dma_device_id": "system", 00:27:10.577 "dma_device_type": 1 00:27:10.577 }, 00:27:10.577 { 00:27:10.577 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:10.577 "dma_device_type": 2 00:27:10.577 } 00:27:10.577 ], 00:27:10.577 "driver_specific": { 00:27:10.577 "raid": { 00:27:10.577 "uuid": "e4f75b50-66b3-4b71-bf13-84163eabc663", 00:27:10.577 "strip_size_kb": 0, 00:27:10.577 "state": "online", 00:27:10.577 "raid_level": "raid1", 00:27:10.577 "superblock": true, 00:27:10.577 "num_base_bdevs": 2, 00:27:10.577 "num_base_bdevs_discovered": 2, 00:27:10.577 "num_base_bdevs_operational": 2, 00:27:10.577 "base_bdevs_list": [ 00:27:10.577 { 00:27:10.577 "name": "pt1", 00:27:10.577 "uuid": "00000000-0000-0000-0000-000000000001", 00:27:10.577 "is_configured": true, 00:27:10.577 "data_offset": 2048, 00:27:10.577 "data_size": 63488 00:27:10.577 }, 00:27:10.577 { 00:27:10.577 "name": "pt2", 00:27:10.577 "uuid": "00000000-0000-0000-0000-000000000002", 00:27:10.577 "is_configured": true, 00:27:10.577 "data_offset": 2048, 00:27:10.577 "data_size": 63488 00:27:10.577 } 00:27:10.577 ] 00:27:10.577 } 00:27:10.577 } 00:27:10.577 }' 00:27:10.578 18:25:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:27:10.578 18:25:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:27:10.578 pt2' 00:27:10.578 18:25:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:27:10.578 18:25:41 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:27:10.578 18:25:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:27:10.578 18:25:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:27:10.578 18:25:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:27:10.578 18:25:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:10.578 18:25:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:10.578 18:25:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:10.578 18:25:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:27:10.578 18:25:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:27:10.578 18:25:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:27:10.578 18:25:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:27:10.578 18:25:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:27:10.578 18:25:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:10.578 18:25:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:10.578 18:25:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:10.836 18:25:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:27:10.836 18:25:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:27:10.837 18:25:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b 
raid_bdev1 00:27:10.837 18:25:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:10.837 18:25:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:10.837 18:25:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:27:10.837 [2024-12-06 18:25:41.545944] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:27:10.837 18:25:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:10.837 18:25:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=e4f75b50-66b3-4b71-bf13-84163eabc663 00:27:10.837 18:25:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z e4f75b50-66b3-4b71-bf13-84163eabc663 ']' 00:27:10.837 18:25:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:27:10.837 18:25:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:10.837 18:25:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:10.837 [2024-12-06 18:25:41.597628] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:27:10.837 [2024-12-06 18:25:41.597800] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:27:10.837 [2024-12-06 18:25:41.597916] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:27:10.837 [2024-12-06 18:25:41.597980] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:27:10.837 [2024-12-06 18:25:41.597996] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:27:10.837 18:25:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:10.837 18:25:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:27:10.837 18:25:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:10.837 18:25:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:27:10.837 18:25:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:10.837 18:25:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:10.837 18:25:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:27:10.837 18:25:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:27:10.837 18:25:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:27:10.837 18:25:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:27:10.837 18:25:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:10.837 18:25:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:10.837 18:25:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:10.837 18:25:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:27:10.837 18:25:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:27:10.837 18:25:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:10.837 18:25:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:10.837 18:25:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:10.837 18:25:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:27:10.837 18:25:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:27:10.837 18:25:41 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:27:10.837 18:25:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:10.837 18:25:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:10.837 18:25:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:27:10.837 18:25:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:27:10.837 18:25:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:27:10.837 18:25:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:27:10.837 18:25:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:27:10.837 18:25:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:10.837 18:25:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:27:10.837 18:25:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:10.837 18:25:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:27:10.837 18:25:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:10.837 18:25:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:10.837 [2024-12-06 18:25:41.733678] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:27:10.837 [2024-12-06 18:25:41.735917] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:27:10.837 [2024-12-06 18:25:41.735981] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock 
of a different raid bdev found on bdev malloc1 00:27:10.837 [2024-12-06 18:25:41.736042] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:27:10.837 [2024-12-06 18:25:41.736060] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:27:10.837 [2024-12-06 18:25:41.736073] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:27:10.837 request: 00:27:10.837 { 00:27:10.837 "name": "raid_bdev1", 00:27:10.837 "raid_level": "raid1", 00:27:10.837 "base_bdevs": [ 00:27:10.837 "malloc1", 00:27:10.837 "malloc2" 00:27:10.837 ], 00:27:10.837 "superblock": false, 00:27:10.837 "method": "bdev_raid_create", 00:27:10.837 "req_id": 1 00:27:10.837 } 00:27:10.837 Got JSON-RPC error response 00:27:10.837 response: 00:27:10.837 { 00:27:10.837 "code": -17, 00:27:10.837 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:27:10.837 } 00:27:10.837 18:25:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:27:10.837 18:25:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:27:10.837 18:25:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:27:10.837 18:25:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:27:10.837 18:25:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:27:10.837 18:25:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:27:10.837 18:25:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:10.837 18:25:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:10.837 18:25:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:10.837 18:25:41 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:11.096 18:25:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:27:11.096 18:25:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:27:11.096 18:25:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:27:11.096 18:25:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:11.096 18:25:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:11.096 [2024-12-06 18:25:41.793660] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:27:11.096 [2024-12-06 18:25:41.793729] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:11.096 [2024-12-06 18:25:41.793771] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:27:11.096 [2024-12-06 18:25:41.793786] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:11.096 [2024-12-06 18:25:41.796469] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:11.096 [2024-12-06 18:25:41.796514] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:27:11.096 [2024-12-06 18:25:41.796604] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:27:11.096 [2024-12-06 18:25:41.796662] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:27:11.096 pt1 00:27:11.096 18:25:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:11.096 18:25:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:27:11.096 18:25:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:27:11.096 18:25:41 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:27:11.096 18:25:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:27:11.096 18:25:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:27:11.096 18:25:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:27:11.096 18:25:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:11.096 18:25:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:11.096 18:25:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:11.096 18:25:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:11.096 18:25:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:11.096 18:25:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:11.096 18:25:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:11.096 18:25:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:11.096 18:25:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:11.096 18:25:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:11.096 "name": "raid_bdev1", 00:27:11.096 "uuid": "e4f75b50-66b3-4b71-bf13-84163eabc663", 00:27:11.096 "strip_size_kb": 0, 00:27:11.096 "state": "configuring", 00:27:11.096 "raid_level": "raid1", 00:27:11.096 "superblock": true, 00:27:11.096 "num_base_bdevs": 2, 00:27:11.096 "num_base_bdevs_discovered": 1, 00:27:11.096 "num_base_bdevs_operational": 2, 00:27:11.096 "base_bdevs_list": [ 00:27:11.096 { 00:27:11.096 "name": "pt1", 00:27:11.096 "uuid": "00000000-0000-0000-0000-000000000001", 00:27:11.096 
"is_configured": true, 00:27:11.096 "data_offset": 2048, 00:27:11.096 "data_size": 63488 00:27:11.096 }, 00:27:11.096 { 00:27:11.096 "name": null, 00:27:11.096 "uuid": "00000000-0000-0000-0000-000000000002", 00:27:11.096 "is_configured": false, 00:27:11.096 "data_offset": 2048, 00:27:11.096 "data_size": 63488 00:27:11.096 } 00:27:11.096 ] 00:27:11.096 }' 00:27:11.096 18:25:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:11.096 18:25:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:11.355 18:25:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:27:11.355 18:25:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:27:11.355 18:25:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:27:11.355 18:25:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:27:11.355 18:25:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:11.355 18:25:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:11.355 [2024-12-06 18:25:42.237668] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:27:11.355 [2024-12-06 18:25:42.237754] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:11.355 [2024-12-06 18:25:42.237780] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:27:11.355 [2024-12-06 18:25:42.237795] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:11.355 [2024-12-06 18:25:42.238290] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:11.355 [2024-12-06 18:25:42.238315] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:27:11.355 [2024-12-06 18:25:42.238400] 
bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:27:11.355 [2024-12-06 18:25:42.238432] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:27:11.355 [2024-12-06 18:25:42.238546] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:27:11.355 [2024-12-06 18:25:42.238560] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:27:11.355 [2024-12-06 18:25:42.238846] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:27:11.355 [2024-12-06 18:25:42.238985] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:27:11.355 [2024-12-06 18:25:42.238995] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:27:11.355 [2024-12-06 18:25:42.239138] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:11.355 pt2 00:27:11.355 18:25:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:11.355 18:25:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:27:11.355 18:25:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:27:11.355 18:25:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:27:11.355 18:25:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:27:11.355 18:25:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:27:11.355 18:25:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:27:11.355 18:25:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:27:11.355 18:25:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:27:11.355 
18:25:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:11.355 18:25:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:11.355 18:25:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:11.355 18:25:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:11.355 18:25:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:11.355 18:25:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:11.355 18:25:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:11.355 18:25:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:11.355 18:25:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:11.355 18:25:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:11.355 "name": "raid_bdev1", 00:27:11.355 "uuid": "e4f75b50-66b3-4b71-bf13-84163eabc663", 00:27:11.355 "strip_size_kb": 0, 00:27:11.355 "state": "online", 00:27:11.355 "raid_level": "raid1", 00:27:11.355 "superblock": true, 00:27:11.355 "num_base_bdevs": 2, 00:27:11.355 "num_base_bdevs_discovered": 2, 00:27:11.355 "num_base_bdevs_operational": 2, 00:27:11.355 "base_bdevs_list": [ 00:27:11.355 { 00:27:11.355 "name": "pt1", 00:27:11.355 "uuid": "00000000-0000-0000-0000-000000000001", 00:27:11.355 "is_configured": true, 00:27:11.355 "data_offset": 2048, 00:27:11.355 "data_size": 63488 00:27:11.355 }, 00:27:11.355 { 00:27:11.355 "name": "pt2", 00:27:11.355 "uuid": "00000000-0000-0000-0000-000000000002", 00:27:11.355 "is_configured": true, 00:27:11.355 "data_offset": 2048, 00:27:11.355 "data_size": 63488 00:27:11.355 } 00:27:11.355 ] 00:27:11.355 }' 00:27:11.355 18:25:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # 
xtrace_disable 00:27:11.355 18:25:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:11.922 18:25:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:27:11.922 18:25:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:27:11.922 18:25:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:27:11.922 18:25:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:27:11.922 18:25:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:27:11.922 18:25:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:27:11.922 18:25:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:27:11.922 18:25:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:27:11.922 18:25:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:11.922 18:25:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:11.922 [2024-12-06 18:25:42.697885] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:27:11.922 18:25:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:11.922 18:25:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:27:11.922 "name": "raid_bdev1", 00:27:11.922 "aliases": [ 00:27:11.922 "e4f75b50-66b3-4b71-bf13-84163eabc663" 00:27:11.922 ], 00:27:11.922 "product_name": "Raid Volume", 00:27:11.922 "block_size": 512, 00:27:11.922 "num_blocks": 63488, 00:27:11.922 "uuid": "e4f75b50-66b3-4b71-bf13-84163eabc663", 00:27:11.922 "assigned_rate_limits": { 00:27:11.922 "rw_ios_per_sec": 0, 00:27:11.922 "rw_mbytes_per_sec": 0, 00:27:11.922 "r_mbytes_per_sec": 0, 00:27:11.922 "w_mbytes_per_sec": 0 
00:27:11.922 }, 00:27:11.922 "claimed": false, 00:27:11.922 "zoned": false, 00:27:11.922 "supported_io_types": { 00:27:11.922 "read": true, 00:27:11.922 "write": true, 00:27:11.922 "unmap": false, 00:27:11.922 "flush": false, 00:27:11.922 "reset": true, 00:27:11.922 "nvme_admin": false, 00:27:11.922 "nvme_io": false, 00:27:11.922 "nvme_io_md": false, 00:27:11.922 "write_zeroes": true, 00:27:11.922 "zcopy": false, 00:27:11.922 "get_zone_info": false, 00:27:11.922 "zone_management": false, 00:27:11.922 "zone_append": false, 00:27:11.922 "compare": false, 00:27:11.922 "compare_and_write": false, 00:27:11.922 "abort": false, 00:27:11.922 "seek_hole": false, 00:27:11.922 "seek_data": false, 00:27:11.922 "copy": false, 00:27:11.922 "nvme_iov_md": false 00:27:11.922 }, 00:27:11.922 "memory_domains": [ 00:27:11.922 { 00:27:11.922 "dma_device_id": "system", 00:27:11.922 "dma_device_type": 1 00:27:11.922 }, 00:27:11.922 { 00:27:11.922 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:11.922 "dma_device_type": 2 00:27:11.922 }, 00:27:11.922 { 00:27:11.922 "dma_device_id": "system", 00:27:11.922 "dma_device_type": 1 00:27:11.922 }, 00:27:11.922 { 00:27:11.922 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:11.922 "dma_device_type": 2 00:27:11.922 } 00:27:11.922 ], 00:27:11.922 "driver_specific": { 00:27:11.922 "raid": { 00:27:11.922 "uuid": "e4f75b50-66b3-4b71-bf13-84163eabc663", 00:27:11.922 "strip_size_kb": 0, 00:27:11.922 "state": "online", 00:27:11.922 "raid_level": "raid1", 00:27:11.922 "superblock": true, 00:27:11.922 "num_base_bdevs": 2, 00:27:11.922 "num_base_bdevs_discovered": 2, 00:27:11.922 "num_base_bdevs_operational": 2, 00:27:11.922 "base_bdevs_list": [ 00:27:11.922 { 00:27:11.922 "name": "pt1", 00:27:11.922 "uuid": "00000000-0000-0000-0000-000000000001", 00:27:11.922 "is_configured": true, 00:27:11.922 "data_offset": 2048, 00:27:11.922 "data_size": 63488 00:27:11.922 }, 00:27:11.922 { 00:27:11.922 "name": "pt2", 00:27:11.922 "uuid": 
"00000000-0000-0000-0000-000000000002", 00:27:11.922 "is_configured": true, 00:27:11.922 "data_offset": 2048, 00:27:11.922 "data_size": 63488 00:27:11.922 } 00:27:11.922 ] 00:27:11.922 } 00:27:11.922 } 00:27:11.922 }' 00:27:11.922 18:25:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:27:11.922 18:25:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:27:11.922 pt2' 00:27:11.922 18:25:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:27:11.922 18:25:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:27:11.922 18:25:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:27:11.922 18:25:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:27:11.922 18:25:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:11.922 18:25:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:11.922 18:25:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:27:11.922 18:25:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:11.922 18:25:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:27:11.922 18:25:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:27:11.922 18:25:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:27:11.922 18:25:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:27:11.922 18:25:42 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:27:11.922 18:25:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:11.922 18:25:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:12.180 18:25:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:12.180 18:25:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:27:12.180 18:25:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:27:12.180 18:25:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:27:12.180 18:25:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:27:12.180 18:25:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:12.180 18:25:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:12.180 [2024-12-06 18:25:42.909873] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:27:12.180 18:25:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:12.181 18:25:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' e4f75b50-66b3-4b71-bf13-84163eabc663 '!=' e4f75b50-66b3-4b71-bf13-84163eabc663 ']' 00:27:12.181 18:25:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:27:12.181 18:25:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:27:12.181 18:25:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:27:12.181 18:25:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:27:12.181 18:25:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:12.181 18:25:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # 
set +x 00:27:12.181 [2024-12-06 18:25:42.953692] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:27:12.181 18:25:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:12.181 18:25:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:27:12.181 18:25:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:27:12.181 18:25:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:27:12.181 18:25:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:27:12.181 18:25:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:27:12.181 18:25:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:27:12.181 18:25:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:12.181 18:25:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:12.181 18:25:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:12.181 18:25:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:12.181 18:25:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:12.181 18:25:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:12.181 18:25:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:12.181 18:25:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:12.181 18:25:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:12.181 18:25:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:12.181 "name": "raid_bdev1", 
00:27:12.181 "uuid": "e4f75b50-66b3-4b71-bf13-84163eabc663", 00:27:12.181 "strip_size_kb": 0, 00:27:12.181 "state": "online", 00:27:12.181 "raid_level": "raid1", 00:27:12.181 "superblock": true, 00:27:12.181 "num_base_bdevs": 2, 00:27:12.181 "num_base_bdevs_discovered": 1, 00:27:12.181 "num_base_bdevs_operational": 1, 00:27:12.181 "base_bdevs_list": [ 00:27:12.181 { 00:27:12.181 "name": null, 00:27:12.181 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:12.181 "is_configured": false, 00:27:12.181 "data_offset": 0, 00:27:12.181 "data_size": 63488 00:27:12.181 }, 00:27:12.181 { 00:27:12.181 "name": "pt2", 00:27:12.181 "uuid": "00000000-0000-0000-0000-000000000002", 00:27:12.181 "is_configured": true, 00:27:12.181 "data_offset": 2048, 00:27:12.181 "data_size": 63488 00:27:12.181 } 00:27:12.181 ] 00:27:12.181 }' 00:27:12.181 18:25:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:12.181 18:25:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:12.760 18:25:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:27:12.760 18:25:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:12.760 18:25:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:12.760 [2024-12-06 18:25:43.425672] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:27:12.760 [2024-12-06 18:25:43.425705] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:27:12.760 [2024-12-06 18:25:43.425785] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:27:12.760 [2024-12-06 18:25:43.425849] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:27:12.760 [2024-12-06 18:25:43.425876] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name 
raid_bdev1, state offline 00:27:12.760 18:25:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:12.760 18:25:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:27:12.760 18:25:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:12.760 18:25:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:12.760 18:25:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:12.760 18:25:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:12.760 18:25:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:27:12.760 18:25:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:27:12.760 18:25:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:27:12.760 18:25:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:27:12.760 18:25:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:27:12.760 18:25:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:12.760 18:25:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:12.760 18:25:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:12.760 18:25:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:27:12.760 18:25:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:27:12.760 18:25:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:27:12.760 18:25:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:27:12.760 18:25:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=1 00:27:12.760 18:25:43 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:27:12.760 18:25:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:12.760 18:25:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:12.760 [2024-12-06 18:25:43.485679] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:27:12.760 [2024-12-06 18:25:43.485882] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:12.760 [2024-12-06 18:25:43.485912] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:27:12.760 [2024-12-06 18:25:43.485927] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:12.760 [2024-12-06 18:25:43.488564] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:12.760 [2024-12-06 18:25:43.488609] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:27:12.760 [2024-12-06 18:25:43.488696] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:27:12.760 [2024-12-06 18:25:43.488750] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:27:12.760 [2024-12-06 18:25:43.488853] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:27:12.760 [2024-12-06 18:25:43.488868] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:27:12.760 [2024-12-06 18:25:43.489123] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:27:12.760 [2024-12-06 18:25:43.489293] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:27:12.760 [2024-12-06 18:25:43.489311] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:27:12.760 
[2024-12-06 18:25:43.489458] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:12.760 pt2 00:27:12.760 18:25:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:12.760 18:25:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:27:12.760 18:25:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:27:12.760 18:25:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:27:12.760 18:25:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:27:12.760 18:25:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:27:12.760 18:25:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:27:12.760 18:25:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:12.760 18:25:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:12.760 18:25:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:12.760 18:25:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:12.761 18:25:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:12.761 18:25:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:12.761 18:25:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:12.761 18:25:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:12.761 18:25:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:12.761 18:25:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:12.761 "name": 
"raid_bdev1", 00:27:12.761 "uuid": "e4f75b50-66b3-4b71-bf13-84163eabc663", 00:27:12.761 "strip_size_kb": 0, 00:27:12.761 "state": "online", 00:27:12.761 "raid_level": "raid1", 00:27:12.761 "superblock": true, 00:27:12.761 "num_base_bdevs": 2, 00:27:12.761 "num_base_bdevs_discovered": 1, 00:27:12.761 "num_base_bdevs_operational": 1, 00:27:12.761 "base_bdevs_list": [ 00:27:12.761 { 00:27:12.761 "name": null, 00:27:12.761 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:12.761 "is_configured": false, 00:27:12.761 "data_offset": 2048, 00:27:12.761 "data_size": 63488 00:27:12.761 }, 00:27:12.761 { 00:27:12.761 "name": "pt2", 00:27:12.761 "uuid": "00000000-0000-0000-0000-000000000002", 00:27:12.761 "is_configured": true, 00:27:12.761 "data_offset": 2048, 00:27:12.761 "data_size": 63488 00:27:12.761 } 00:27:12.761 ] 00:27:12.761 }' 00:27:12.761 18:25:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:12.761 18:25:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:13.030 18:25:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:27:13.030 18:25:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:13.030 18:25:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:13.030 [2024-12-06 18:25:43.965619] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:27:13.030 [2024-12-06 18:25:43.965766] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:27:13.030 [2024-12-06 18:25:43.965859] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:27:13.030 [2024-12-06 18:25:43.965912] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:27:13.030 [2024-12-06 18:25:43.965923] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000008200 name raid_bdev1, state offline 00:27:13.030 18:25:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:13.289 18:25:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:13.289 18:25:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:27:13.289 18:25:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:13.289 18:25:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:13.289 18:25:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:13.290 18:25:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:27:13.290 18:25:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:27:13.290 18:25:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:27:13.290 18:25:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:27:13.290 18:25:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:13.290 18:25:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:13.290 [2024-12-06 18:25:44.025661] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:27:13.290 [2024-12-06 18:25:44.025729] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:13.290 [2024-12-06 18:25:44.025752] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:27:13.290 [2024-12-06 18:25:44.025764] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:13.290 [2024-12-06 18:25:44.028276] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:13.290 [2024-12-06 18:25:44.028316] vbdev_passthru.c: 
711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:27:13.290 [2024-12-06 18:25:44.028407] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:27:13.290 [2024-12-06 18:25:44.028452] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:27:13.290 [2024-12-06 18:25:44.028581] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:27:13.290 [2024-12-06 18:25:44.028594] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:27:13.290 [2024-12-06 18:25:44.028612] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:27:13.290 [2024-12-06 18:25:44.028660] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:27:13.290 [2024-12-06 18:25:44.028728] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:27:13.290 [2024-12-06 18:25:44.028737] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:27:13.290 [2024-12-06 18:25:44.028993] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:27:13.290 [2024-12-06 18:25:44.029138] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:27:13.290 [2024-12-06 18:25:44.029168] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:27:13.290 [2024-12-06 18:25:44.029308] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:13.290 pt1 00:27:13.290 18:25:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:13.290 18:25:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:27:13.290 18:25:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online 
raid1 0 1 00:27:13.290 18:25:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:27:13.290 18:25:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:27:13.290 18:25:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:27:13.290 18:25:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:27:13.290 18:25:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:27:13.290 18:25:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:13.290 18:25:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:13.290 18:25:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:13.290 18:25:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:13.290 18:25:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:13.290 18:25:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:13.290 18:25:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:13.290 18:25:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:13.290 18:25:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:13.290 18:25:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:13.290 "name": "raid_bdev1", 00:27:13.290 "uuid": "e4f75b50-66b3-4b71-bf13-84163eabc663", 00:27:13.290 "strip_size_kb": 0, 00:27:13.290 "state": "online", 00:27:13.290 "raid_level": "raid1", 00:27:13.290 "superblock": true, 00:27:13.290 "num_base_bdevs": 2, 00:27:13.290 "num_base_bdevs_discovered": 1, 00:27:13.290 "num_base_bdevs_operational": 1, 00:27:13.290 
"base_bdevs_list": [ 00:27:13.290 { 00:27:13.290 "name": null, 00:27:13.290 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:13.290 "is_configured": false, 00:27:13.290 "data_offset": 2048, 00:27:13.290 "data_size": 63488 00:27:13.290 }, 00:27:13.290 { 00:27:13.290 "name": "pt2", 00:27:13.290 "uuid": "00000000-0000-0000-0000-000000000002", 00:27:13.290 "is_configured": true, 00:27:13.290 "data_offset": 2048, 00:27:13.290 "data_size": 63488 00:27:13.290 } 00:27:13.290 ] 00:27:13.290 }' 00:27:13.290 18:25:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:13.290 18:25:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:13.549 18:25:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:27:13.549 18:25:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:27:13.549 18:25:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:13.549 18:25:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:13.549 18:25:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:13.808 18:25:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:27:13.808 18:25:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:27:13.808 18:25:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:27:13.808 18:25:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:13.808 18:25:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:13.808 [2024-12-06 18:25:44.517888] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:27:13.808 18:25:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:27:13.808 18:25:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' e4f75b50-66b3-4b71-bf13-84163eabc663 '!=' e4f75b50-66b3-4b71-bf13-84163eabc663 ']' 00:27:13.808 18:25:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 62945 00:27:13.808 18:25:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 62945 ']' 00:27:13.808 18:25:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 62945 00:27:13.808 18:25:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:27:13.808 18:25:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:13.808 18:25:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62945 00:27:13.808 18:25:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:27:13.808 killing process with pid 62945 00:27:13.808 18:25:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:27:13.808 18:25:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62945' 00:27:13.808 18:25:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 62945 00:27:13.809 [2024-12-06 18:25:44.602978] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:27:13.809 18:25:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 62945 00:27:13.809 [2024-12-06 18:25:44.603082] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:27:13.809 [2024-12-06 18:25:44.603129] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:27:13.809 [2024-12-06 18:25:44.603146] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:27:14.068 [2024-12-06 18:25:44.817703] 
bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:27:15.455 ************************************ 00:27:15.455 END TEST raid_superblock_test 00:27:15.455 ************************************ 00:27:15.455 18:25:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:27:15.455 00:27:15.455 real 0m6.261s 00:27:15.455 user 0m9.484s 00:27:15.455 sys 0m1.321s 00:27:15.455 18:25:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:15.455 18:25:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:15.455 18:25:46 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid1 2 read 00:27:15.455 18:25:46 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:27:15.455 18:25:46 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:15.455 18:25:46 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:27:15.455 ************************************ 00:27:15.455 START TEST raid_read_error_test 00:27:15.455 ************************************ 00:27:15.455 18:25:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 2 read 00:27:15.455 18:25:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:27:15.455 18:25:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:27:15.455 18:25:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:27:15.455 18:25:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:27:15.455 18:25:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:27:15.455 18:25:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:27:15.455 18:25:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:27:15.455 18:25:46 bdev_raid.raid_read_error_test 
-- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:27:15.455 18:25:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:27:15.455 18:25:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:27:15.455 18:25:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:27:15.455 18:25:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:27:15.455 18:25:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:27:15.455 18:25:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:27:15.455 18:25:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:27:15.455 18:25:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:27:15.455 18:25:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:27:15.455 18:25:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:27:15.455 18:25:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:27:15.455 18:25:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:27:15.455 18:25:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:27:15.455 18:25:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.1x9Ty07EEx 00:27:15.455 18:25:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=63275 00:27:15.455 18:25:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 63275 00:27:15.455 18:25:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:27:15.455 18:25:46 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@835 -- # '[' -z 63275 ']' 00:27:15.455 18:25:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:15.455 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:15.455 18:25:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:15.455 18:25:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:15.455 18:25:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:15.455 18:25:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:27:15.455 [2024-12-06 18:25:46.167986] Starting SPDK v25.01-pre git sha1 b6a18b192 / DPDK 24.03.0 initialization... 00:27:15.455 [2024-12-06 18:25:46.168124] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63275 ] 00:27:15.455 [2024-12-06 18:25:46.350813] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:15.714 [2024-12-06 18:25:46.466689] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:15.972 [2024-12-06 18:25:46.673751] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:27:15.972 [2024-12-06 18:25:46.673821] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:27:16.231 18:25:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:16.231 18:25:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:27:16.231 18:25:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:27:16.231 18:25:47 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:27:16.231 18:25:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:16.231 18:25:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:27:16.231 BaseBdev1_malloc 00:27:16.231 18:25:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:16.231 18:25:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:27:16.231 18:25:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:16.231 18:25:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:27:16.231 true 00:27:16.231 18:25:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:16.231 18:25:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:27:16.231 18:25:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:16.231 18:25:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:27:16.231 [2024-12-06 18:25:47.067672] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:27:16.231 [2024-12-06 18:25:47.067732] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:16.231 [2024-12-06 18:25:47.067755] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:27:16.231 [2024-12-06 18:25:47.067769] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:16.231 [2024-12-06 18:25:47.070463] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:16.231 [2024-12-06 18:25:47.070512] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 
00:27:16.231 BaseBdev1 00:27:16.231 18:25:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:16.231 18:25:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:27:16.231 18:25:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:27:16.231 18:25:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:16.231 18:25:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:27:16.231 BaseBdev2_malloc 00:27:16.231 18:25:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:16.231 18:25:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:27:16.231 18:25:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:16.231 18:25:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:27:16.231 true 00:27:16.231 18:25:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:16.231 18:25:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:27:16.231 18:25:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:16.231 18:25:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:27:16.231 [2024-12-06 18:25:47.124599] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:27:16.231 [2024-12-06 18:25:47.124659] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:16.231 [2024-12-06 18:25:47.124677] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:27:16.231 [2024-12-06 18:25:47.124691] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: 
bdev claimed 00:27:16.231 [2024-12-06 18:25:47.127232] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:16.231 [2024-12-06 18:25:47.127268] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:27:16.231 BaseBdev2 00:27:16.231 18:25:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:16.231 18:25:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:27:16.231 18:25:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:16.231 18:25:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:27:16.231 [2024-12-06 18:25:47.132654] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:27:16.231 [2024-12-06 18:25:47.134884] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:27:16.231 [2024-12-06 18:25:47.135112] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:27:16.232 [2024-12-06 18:25:47.135130] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:27:16.232 [2024-12-06 18:25:47.135453] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:27:16.232 [2024-12-06 18:25:47.135632] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:27:16.232 [2024-12-06 18:25:47.135644] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:27:16.232 [2024-12-06 18:25:47.135802] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:16.232 18:25:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:16.232 18:25:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state 
raid_bdev1 online raid1 0 2 00:27:16.232 18:25:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:27:16.232 18:25:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:27:16.232 18:25:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:27:16.232 18:25:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:27:16.232 18:25:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:27:16.232 18:25:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:16.232 18:25:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:16.232 18:25:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:16.232 18:25:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:16.232 18:25:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:16.232 18:25:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:16.232 18:25:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:16.232 18:25:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:27:16.232 18:25:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:16.490 18:25:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:16.490 "name": "raid_bdev1", 00:27:16.490 "uuid": "53cefc9d-e40d-4b91-a32f-3160f4b0cfd4", 00:27:16.490 "strip_size_kb": 0, 00:27:16.490 "state": "online", 00:27:16.490 "raid_level": "raid1", 00:27:16.490 "superblock": true, 00:27:16.490 "num_base_bdevs": 2, 00:27:16.490 "num_base_bdevs_discovered": 2, 00:27:16.490 "num_base_bdevs_operational": 
2, 00:27:16.490 "base_bdevs_list": [ 00:27:16.490 { 00:27:16.490 "name": "BaseBdev1", 00:27:16.490 "uuid": "c5e890f9-9601-543b-9d8f-46e53a315fe1", 00:27:16.490 "is_configured": true, 00:27:16.490 "data_offset": 2048, 00:27:16.490 "data_size": 63488 00:27:16.490 }, 00:27:16.490 { 00:27:16.490 "name": "BaseBdev2", 00:27:16.490 "uuid": "ba53f1cc-a943-50ba-a29d-ecca40c1fbc7", 00:27:16.490 "is_configured": true, 00:27:16.490 "data_offset": 2048, 00:27:16.490 "data_size": 63488 00:27:16.490 } 00:27:16.490 ] 00:27:16.490 }' 00:27:16.490 18:25:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:16.490 18:25:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:27:16.748 18:25:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:27:16.748 18:25:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:27:16.748 [2024-12-06 18:25:47.677435] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:27:17.684 18:25:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:27:17.684 18:25:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:17.684 18:25:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:27:17.684 18:25:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:17.684 18:25:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:27:17.684 18:25:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:27:17.684 18:25:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ read = \w\r\i\t\e ]] 00:27:17.684 18:25:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:27:17.684 
18:25:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:27:17.684 18:25:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:27:17.684 18:25:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:27:17.684 18:25:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:27:17.684 18:25:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:27:17.684 18:25:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:27:17.684 18:25:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:17.684 18:25:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:17.684 18:25:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:17.684 18:25:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:17.684 18:25:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:17.684 18:25:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:17.684 18:25:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:17.684 18:25:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:27:17.684 18:25:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:17.943 18:25:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:17.943 "name": "raid_bdev1", 00:27:17.943 "uuid": "53cefc9d-e40d-4b91-a32f-3160f4b0cfd4", 00:27:17.943 "strip_size_kb": 0, 00:27:17.943 "state": "online", 00:27:17.943 "raid_level": "raid1", 00:27:17.943 "superblock": true, 00:27:17.943 "num_base_bdevs": 
2, 00:27:17.943 "num_base_bdevs_discovered": 2, 00:27:17.943 "num_base_bdevs_operational": 2, 00:27:17.943 "base_bdevs_list": [ 00:27:17.943 { 00:27:17.943 "name": "BaseBdev1", 00:27:17.943 "uuid": "c5e890f9-9601-543b-9d8f-46e53a315fe1", 00:27:17.943 "is_configured": true, 00:27:17.943 "data_offset": 2048, 00:27:17.943 "data_size": 63488 00:27:17.943 }, 00:27:17.943 { 00:27:17.943 "name": "BaseBdev2", 00:27:17.943 "uuid": "ba53f1cc-a943-50ba-a29d-ecca40c1fbc7", 00:27:17.943 "is_configured": true, 00:27:17.943 "data_offset": 2048, 00:27:17.943 "data_size": 63488 00:27:17.943 } 00:27:17.943 ] 00:27:17.943 }' 00:27:17.943 18:25:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:17.943 18:25:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:27:18.202 18:25:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:27:18.202 18:25:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:18.202 18:25:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:27:18.202 [2024-12-06 18:25:49.080465] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:27:18.202 [2024-12-06 18:25:49.080511] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:27:18.202 [2024-12-06 18:25:49.083241] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:27:18.202 [2024-12-06 18:25:49.083297] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:18.202 [2024-12-06 18:25:49.083390] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:27:18.202 [2024-12-06 18:25:49.083405] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:27:18.202 { 00:27:18.202 "results": [ 00:27:18.202 { 00:27:18.202 "job": 
"raid_bdev1", 00:27:18.202 "core_mask": "0x1", 00:27:18.202 "workload": "randrw", 00:27:18.202 "percentage": 50, 00:27:18.202 "status": "finished", 00:27:18.202 "queue_depth": 1, 00:27:18.202 "io_size": 131072, 00:27:18.202 "runtime": 1.403306, 00:27:18.202 "iops": 18291.805208557507, 00:27:18.202 "mibps": 2286.4756510696884, 00:27:18.202 "io_failed": 0, 00:27:18.202 "io_timeout": 0, 00:27:18.202 "avg_latency_us": 51.95530958615716, 00:27:18.202 "min_latency_us": 24.880321285140564, 00:27:18.202 "max_latency_us": 1526.5413654618474 00:27:18.202 } 00:27:18.202 ], 00:27:18.202 "core_count": 1 00:27:18.202 } 00:27:18.202 18:25:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:18.202 18:25:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 63275 00:27:18.202 18:25:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 63275 ']' 00:27:18.202 18:25:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 63275 00:27:18.202 18:25:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:27:18.202 18:25:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:18.202 18:25:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63275 00:27:18.202 18:25:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:27:18.202 killing process with pid 63275 00:27:18.202 18:25:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:27:18.202 18:25:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63275' 00:27:18.202 18:25:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 63275 00:27:18.202 [2024-12-06 18:25:49.137050] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:27:18.202 
18:25:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 63275 00:27:18.461 [2024-12-06 18:25:49.276316] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:27:19.840 18:25:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.1x9Ty07EEx 00:27:19.840 18:25:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:27:19.840 18:25:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:27:19.840 18:25:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:27:19.840 18:25:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:27:19.840 18:25:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:27:19.841 18:25:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:27:19.841 18:25:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:27:19.841 00:27:19.841 real 0m4.441s 00:27:19.841 user 0m5.284s 00:27:19.841 sys 0m0.640s 00:27:19.841 18:25:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:19.841 18:25:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:27:19.841 ************************************ 00:27:19.841 END TEST raid_read_error_test 00:27:19.841 ************************************ 00:27:19.841 18:25:50 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid1 2 write 00:27:19.841 18:25:50 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:27:19.841 18:25:50 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:19.841 18:25:50 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:27:19.841 ************************************ 00:27:19.841 START TEST raid_write_error_test 00:27:19.841 ************************************ 00:27:19.841 18:25:50 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 2 write 00:27:19.841 18:25:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:27:19.841 18:25:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:27:19.841 18:25:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:27:19.841 18:25:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:27:19.841 18:25:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:27:19.841 18:25:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:27:19.841 18:25:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:27:19.841 18:25:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:27:19.841 18:25:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:27:19.841 18:25:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:27:19.841 18:25:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:27:19.841 18:25:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:27:19.841 18:25:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:27:19.841 18:25:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:27:19.841 18:25:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:27:19.841 18:25:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:27:19.841 18:25:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:27:19.841 18:25:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:27:19.841 
18:25:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:27:19.841 18:25:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:27:19.841 18:25:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:27:19.841 18:25:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.wpGprFJvTx 00:27:19.841 18:25:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=63419 00:27:19.841 18:25:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:27:19.841 18:25:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 63419 00:27:19.841 18:25:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 63419 ']' 00:27:19.841 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:19.841 18:25:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:19.841 18:25:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:19.841 18:25:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:19.841 18:25:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:19.841 18:25:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:27:19.841 [2024-12-06 18:25:50.726424] Starting SPDK v25.01-pre git sha1 b6a18b192 / DPDK 24.03.0 initialization... 
00:27:19.841 [2024-12-06 18:25:50.726658] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63419 ] 00:27:20.100 [2024-12-06 18:25:50.913049] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:20.101 [2024-12-06 18:25:51.028800] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:20.360 [2024-12-06 18:25:51.232202] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:27:20.360 [2024-12-06 18:25:51.232265] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:27:20.619 18:25:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:20.619 18:25:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:27:20.619 18:25:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:27:20.619 18:25:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:27:20.619 18:25:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:20.619 18:25:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:27:20.879 BaseBdev1_malloc 00:27:20.879 18:25:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:20.879 18:25:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:27:20.879 18:25:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:20.879 18:25:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:27:20.879 true 00:27:20.879 18:25:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:27:20.879 18:25:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:27:20.879 18:25:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:20.879 18:25:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:27:20.879 [2024-12-06 18:25:51.599205] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:27:20.879 [2024-12-06 18:25:51.599264] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:20.879 [2024-12-06 18:25:51.599287] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:27:20.879 [2024-12-06 18:25:51.599301] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:20.879 [2024-12-06 18:25:51.601686] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:20.879 [2024-12-06 18:25:51.601731] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:27:20.879 BaseBdev1 00:27:20.879 18:25:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:20.879 18:25:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:27:20.879 18:25:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:27:20.879 18:25:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:20.879 18:25:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:27:20.879 BaseBdev2_malloc 00:27:20.879 18:25:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:20.879 18:25:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:27:20.879 18:25:51 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:20.879 18:25:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:27:20.879 true 00:27:20.879 18:25:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:20.879 18:25:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:27:20.879 18:25:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:20.879 18:25:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:27:20.879 [2024-12-06 18:25:51.667436] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:27:20.879 [2024-12-06 18:25:51.667495] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:20.879 [2024-12-06 18:25:51.667514] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:27:20.879 [2024-12-06 18:25:51.667528] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:20.879 [2024-12-06 18:25:51.669846] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:20.879 [2024-12-06 18:25:51.669888] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:27:20.879 BaseBdev2 00:27:20.879 18:25:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:20.879 18:25:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:27:20.879 18:25:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:20.879 18:25:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:27:20.879 [2024-12-06 18:25:51.679477] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev1 is claimed 00:27:20.879 [2024-12-06 18:25:51.681539] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:27:20.879 [2024-12-06 18:25:51.681734] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:27:20.879 [2024-12-06 18:25:51.681751] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:27:20.879 [2024-12-06 18:25:51.681998] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:27:20.879 [2024-12-06 18:25:51.682184] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:27:20.879 [2024-12-06 18:25:51.682196] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:27:20.879 [2024-12-06 18:25:51.682348] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:20.879 18:25:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:20.879 18:25:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:27:20.879 18:25:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:27:20.879 18:25:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:27:20.879 18:25:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:27:20.879 18:25:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:27:20.879 18:25:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:27:20.879 18:25:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:20.879 18:25:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:20.879 18:25:51 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:20.879 18:25:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:20.879 18:25:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:20.879 18:25:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:20.879 18:25:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:20.879 18:25:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:27:20.879 18:25:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:20.879 18:25:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:20.879 "name": "raid_bdev1", 00:27:20.879 "uuid": "f36351e3-1ea6-4112-b259-265bdd2b1d0e", 00:27:20.879 "strip_size_kb": 0, 00:27:20.879 "state": "online", 00:27:20.879 "raid_level": "raid1", 00:27:20.879 "superblock": true, 00:27:20.879 "num_base_bdevs": 2, 00:27:20.879 "num_base_bdevs_discovered": 2, 00:27:20.879 "num_base_bdevs_operational": 2, 00:27:20.879 "base_bdevs_list": [ 00:27:20.879 { 00:27:20.879 "name": "BaseBdev1", 00:27:20.879 "uuid": "4dc9fa87-5a26-5d71-83a6-970ff5cbb9d0", 00:27:20.879 "is_configured": true, 00:27:20.879 "data_offset": 2048, 00:27:20.879 "data_size": 63488 00:27:20.879 }, 00:27:20.879 { 00:27:20.879 "name": "BaseBdev2", 00:27:20.879 "uuid": "a177ff41-71cf-53c0-a239-63b76e38844f", 00:27:20.879 "is_configured": true, 00:27:20.879 "data_offset": 2048, 00:27:20.879 "data_size": 63488 00:27:20.879 } 00:27:20.879 ] 00:27:20.879 }' 00:27:20.879 18:25:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:20.879 18:25:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:27:21.138 18:25:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:27:21.138 18:25:52 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:27:21.396 [2024-12-06 18:25:52.188211] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:27:22.333 18:25:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:27:22.333 18:25:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:22.333 18:25:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:27:22.334 [2024-12-06 18:25:53.092067] bdev_raid.c:2276:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:27:22.334 [2024-12-06 18:25:53.092136] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:27:22.334 [2024-12-06 18:25:53.092373] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d0000063c0 00:27:22.334 18:25:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:22.334 18:25:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:27:22.334 18:25:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:27:22.334 18:25:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ write = \w\r\i\t\e ]] 00:27:22.334 18:25:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=1 00:27:22.334 18:25:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:27:22.334 18:25:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:27:22.334 18:25:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:27:22.334 18:25:53 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:27:22.334 18:25:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:27:22.334 18:25:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:27:22.334 18:25:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:22.334 18:25:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:22.334 18:25:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:22.334 18:25:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:22.334 18:25:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:22.334 18:25:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:22.334 18:25:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:22.334 18:25:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:27:22.334 18:25:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:22.334 18:25:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:22.334 "name": "raid_bdev1", 00:27:22.334 "uuid": "f36351e3-1ea6-4112-b259-265bdd2b1d0e", 00:27:22.334 "strip_size_kb": 0, 00:27:22.334 "state": "online", 00:27:22.334 "raid_level": "raid1", 00:27:22.334 "superblock": true, 00:27:22.334 "num_base_bdevs": 2, 00:27:22.334 "num_base_bdevs_discovered": 1, 00:27:22.334 "num_base_bdevs_operational": 1, 00:27:22.334 "base_bdevs_list": [ 00:27:22.334 { 00:27:22.334 "name": null, 00:27:22.334 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:22.334 "is_configured": false, 00:27:22.334 "data_offset": 0, 00:27:22.334 "data_size": 63488 00:27:22.334 }, 00:27:22.334 { 00:27:22.334 "name": 
"BaseBdev2", 00:27:22.334 "uuid": "a177ff41-71cf-53c0-a239-63b76e38844f", 00:27:22.334 "is_configured": true, 00:27:22.334 "data_offset": 2048, 00:27:22.334 "data_size": 63488 00:27:22.334 } 00:27:22.334 ] 00:27:22.334 }' 00:27:22.334 18:25:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:22.334 18:25:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:27:22.593 18:25:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:27:22.593 18:25:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:22.593 18:25:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:27:22.593 [2024-12-06 18:25:53.481266] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:27:22.593 [2024-12-06 18:25:53.481456] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:27:22.593 [2024-12-06 18:25:53.484318] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:27:22.593 [2024-12-06 18:25:53.484358] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:22.593 [2024-12-06 18:25:53.484415] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:27:22.593 [2024-12-06 18:25:53.484430] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:27:22.593 { 00:27:22.593 "results": [ 00:27:22.593 { 00:27:22.593 "job": "raid_bdev1", 00:27:22.593 "core_mask": "0x1", 00:27:22.593 "workload": "randrw", 00:27:22.593 "percentage": 50, 00:27:22.593 "status": "finished", 00:27:22.593 "queue_depth": 1, 00:27:22.593 "io_size": 131072, 00:27:22.593 "runtime": 1.293332, 00:27:22.593 "iops": 21739.19766927595, 00:27:22.593 "mibps": 2717.3997086594936, 00:27:22.593 "io_failed": 0, 00:27:22.593 "io_timeout": 0, 
00:27:22.593 "avg_latency_us": 43.22861444354742, 00:27:22.593 "min_latency_us": 23.852208835341365, 00:27:22.593 "max_latency_us": 1454.1622489959839 00:27:22.593 } 00:27:22.593 ], 00:27:22.593 "core_count": 1 00:27:22.593 } 00:27:22.593 18:25:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:22.593 18:25:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 63419 00:27:22.593 18:25:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 63419 ']' 00:27:22.593 18:25:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 63419 00:27:22.593 18:25:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:27:22.593 18:25:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:22.593 18:25:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63419 00:27:22.853 killing process with pid 63419 00:27:22.853 18:25:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:27:22.853 18:25:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:27:22.853 18:25:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63419' 00:27:22.853 18:25:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 63419 00:27:22.853 [2024-12-06 18:25:53.540947] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:27:22.853 18:25:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 63419 00:27:22.853 [2024-12-06 18:25:53.678417] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:27:24.230 18:25:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.wpGprFJvTx 00:27:24.230 18:25:54 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:27:24.230 18:25:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:27:24.230 ************************************ 00:27:24.230 END TEST raid_write_error_test 00:27:24.230 ************************************ 00:27:24.230 18:25:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:27:24.230 18:25:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:27:24.230 18:25:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:27:24.230 18:25:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:27:24.230 18:25:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:27:24.230 00:27:24.230 real 0m4.333s 00:27:24.230 user 0m5.043s 00:27:24.230 sys 0m0.660s 00:27:24.230 18:25:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:24.230 18:25:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:27:24.230 18:25:54 bdev_raid -- bdev/bdev_raid.sh@966 -- # for n in {2..4} 00:27:24.230 18:25:54 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:27:24.230 18:25:54 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid0 3 false 00:27:24.230 18:25:54 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:27:24.230 18:25:54 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:24.230 18:25:54 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:27:24.230 ************************************ 00:27:24.230 START TEST raid_state_function_test 00:27:24.230 ************************************ 00:27:24.230 18:25:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 3 false 00:27:24.230 18:25:54 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:27:24.230 18:25:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:27:24.230 18:25:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:27:24.230 18:25:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:27:24.230 18:25:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:27:24.230 18:25:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:27:24.230 18:25:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:27:24.230 18:25:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:27:24.230 18:25:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:27:24.230 18:25:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:27:24.230 18:25:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:27:24.230 18:25:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:27:24.230 18:25:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:27:24.230 18:25:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:27:24.230 18:25:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:27:24.230 18:25:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:27:24.230 18:25:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:27:24.230 18:25:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:27:24.230 18:25:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:27:24.230 
18:25:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:27:24.230 18:25:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:27:24.230 18:25:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:27:24.230 18:25:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:27:24.230 18:25:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:27:24.230 18:25:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:27:24.230 18:25:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:27:24.230 18:25:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=63557 00:27:24.230 18:25:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 63557' 00:27:24.230 18:25:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:27:24.230 Process raid pid: 63557 00:27:24.230 18:25:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 63557 00:27:24.230 18:25:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 63557 ']' 00:27:24.230 18:25:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:24.230 18:25:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:24.230 18:25:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:24.230 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:27:24.230 18:25:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:24.230 18:25:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:24.230 [2024-12-06 18:25:55.092168] Starting SPDK v25.01-pre git sha1 b6a18b192 / DPDK 24.03.0 initialization... 00:27:24.230 [2024-12-06 18:25:55.092539] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:24.488 [2024-12-06 18:25:55.277377] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:24.488 [2024-12-06 18:25:55.400949] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:24.746 [2024-12-06 18:25:55.632356] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:27:24.746 [2024-12-06 18:25:55.632408] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:27:25.313 18:25:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:25.313 18:25:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:27:25.313 18:25:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:27:25.313 18:25:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:25.313 18:25:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:25.313 [2024-12-06 18:25:55.970422] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:27:25.313 [2024-12-06 18:25:55.970493] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:27:25.313 [2024-12-06 18:25:55.970506] 
bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:27:25.313 [2024-12-06 18:25:55.970520] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:27:25.313 [2024-12-06 18:25:55.970528] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:27:25.313 [2024-12-06 18:25:55.970540] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:27:25.313 18:25:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:25.313 18:25:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:27:25.313 18:25:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:27:25.313 18:25:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:27:25.313 18:25:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:27:25.313 18:25:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:27:25.313 18:25:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:27:25.313 18:25:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:25.313 18:25:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:25.313 18:25:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:25.313 18:25:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:25.313 18:25:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:25.313 18:25:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:25.313 18:25:55 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:25.313 18:25:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:25.313 18:25:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:25.313 18:25:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:25.313 "name": "Existed_Raid", 00:27:25.313 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:25.313 "strip_size_kb": 64, 00:27:25.313 "state": "configuring", 00:27:25.313 "raid_level": "raid0", 00:27:25.313 "superblock": false, 00:27:25.313 "num_base_bdevs": 3, 00:27:25.313 "num_base_bdevs_discovered": 0, 00:27:25.313 "num_base_bdevs_operational": 3, 00:27:25.313 "base_bdevs_list": [ 00:27:25.313 { 00:27:25.313 "name": "BaseBdev1", 00:27:25.313 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:25.313 "is_configured": false, 00:27:25.313 "data_offset": 0, 00:27:25.313 "data_size": 0 00:27:25.313 }, 00:27:25.313 { 00:27:25.313 "name": "BaseBdev2", 00:27:25.313 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:25.313 "is_configured": false, 00:27:25.313 "data_offset": 0, 00:27:25.313 "data_size": 0 00:27:25.313 }, 00:27:25.313 { 00:27:25.313 "name": "BaseBdev3", 00:27:25.313 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:25.313 "is_configured": false, 00:27:25.313 "data_offset": 0, 00:27:25.313 "data_size": 0 00:27:25.313 } 00:27:25.313 ] 00:27:25.313 }' 00:27:25.313 18:25:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:25.313 18:25:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:25.572 18:25:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:27:25.572 18:25:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:25.572 18:25:56 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:25.572 [2024-12-06 18:25:56.429771] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:27:25.572 [2024-12-06 18:25:56.429820] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:27:25.572 18:25:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:25.572 18:25:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:27:25.572 18:25:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:25.572 18:25:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:25.572 [2024-12-06 18:25:56.437781] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:27:25.572 [2024-12-06 18:25:56.437994] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:27:25.572 [2024-12-06 18:25:56.438016] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:27:25.572 [2024-12-06 18:25:56.438031] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:27:25.572 [2024-12-06 18:25:56.438039] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:27:25.572 [2024-12-06 18:25:56.438052] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:27:25.572 18:25:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:25.572 18:25:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:27:25.572 18:25:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:27:25.572 18:25:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:25.572 [2024-12-06 18:25:56.485273] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:27:25.572 BaseBdev1 00:27:25.572 18:25:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:25.572 18:25:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:27:25.572 18:25:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:27:25.572 18:25:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:27:25.572 18:25:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:27:25.572 18:25:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:27:25.572 18:25:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:27:25.572 18:25:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:27:25.572 18:25:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:25.572 18:25:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:25.572 18:25:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:25.572 18:25:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:27:25.572 18:25:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:25.572 18:25:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:25.572 [ 00:27:25.572 { 00:27:25.572 "name": "BaseBdev1", 00:27:25.572 "aliases": [ 00:27:25.572 "cd7156f3-8b63-4c0f-aea8-aa743e25e62a" 00:27:25.572 ], 00:27:25.572 
"product_name": "Malloc disk", 00:27:25.572 "block_size": 512, 00:27:25.572 "num_blocks": 65536, 00:27:25.572 "uuid": "cd7156f3-8b63-4c0f-aea8-aa743e25e62a", 00:27:25.572 "assigned_rate_limits": { 00:27:25.572 "rw_ios_per_sec": 0, 00:27:25.572 "rw_mbytes_per_sec": 0, 00:27:25.572 "r_mbytes_per_sec": 0, 00:27:25.572 "w_mbytes_per_sec": 0 00:27:25.572 }, 00:27:25.572 "claimed": true, 00:27:25.831 "claim_type": "exclusive_write", 00:27:25.831 "zoned": false, 00:27:25.831 "supported_io_types": { 00:27:25.831 "read": true, 00:27:25.831 "write": true, 00:27:25.831 "unmap": true, 00:27:25.831 "flush": true, 00:27:25.831 "reset": true, 00:27:25.831 "nvme_admin": false, 00:27:25.831 "nvme_io": false, 00:27:25.831 "nvme_io_md": false, 00:27:25.831 "write_zeroes": true, 00:27:25.831 "zcopy": true, 00:27:25.831 "get_zone_info": false, 00:27:25.831 "zone_management": false, 00:27:25.831 "zone_append": false, 00:27:25.831 "compare": false, 00:27:25.831 "compare_and_write": false, 00:27:25.831 "abort": true, 00:27:25.831 "seek_hole": false, 00:27:25.831 "seek_data": false, 00:27:25.831 "copy": true, 00:27:25.831 "nvme_iov_md": false 00:27:25.831 }, 00:27:25.831 "memory_domains": [ 00:27:25.831 { 00:27:25.831 "dma_device_id": "system", 00:27:25.831 "dma_device_type": 1 00:27:25.831 }, 00:27:25.831 { 00:27:25.831 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:25.831 "dma_device_type": 2 00:27:25.831 } 00:27:25.831 ], 00:27:25.831 "driver_specific": {} 00:27:25.831 } 00:27:25.831 ] 00:27:25.831 18:25:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:25.831 18:25:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:27:25.831 18:25:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:27:25.831 18:25:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:27:25.831 18:25:56 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:27:25.831 18:25:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:27:25.831 18:25:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:27:25.831 18:25:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:27:25.831 18:25:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:25.831 18:25:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:25.831 18:25:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:25.831 18:25:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:25.831 18:25:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:25.831 18:25:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:25.831 18:25:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:25.831 18:25:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:25.831 18:25:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:25.831 18:25:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:25.831 "name": "Existed_Raid", 00:27:25.831 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:25.831 "strip_size_kb": 64, 00:27:25.831 "state": "configuring", 00:27:25.831 "raid_level": "raid0", 00:27:25.831 "superblock": false, 00:27:25.831 "num_base_bdevs": 3, 00:27:25.831 "num_base_bdevs_discovered": 1, 00:27:25.831 "num_base_bdevs_operational": 3, 00:27:25.831 "base_bdevs_list": [ 00:27:25.831 { 00:27:25.831 "name": "BaseBdev1", 
00:27:25.831 "uuid": "cd7156f3-8b63-4c0f-aea8-aa743e25e62a", 00:27:25.831 "is_configured": true, 00:27:25.831 "data_offset": 0, 00:27:25.831 "data_size": 65536 00:27:25.831 }, 00:27:25.831 { 00:27:25.831 "name": "BaseBdev2", 00:27:25.831 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:25.831 "is_configured": false, 00:27:25.831 "data_offset": 0, 00:27:25.831 "data_size": 0 00:27:25.831 }, 00:27:25.831 { 00:27:25.831 "name": "BaseBdev3", 00:27:25.831 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:25.831 "is_configured": false, 00:27:25.831 "data_offset": 0, 00:27:25.831 "data_size": 0 00:27:25.831 } 00:27:25.831 ] 00:27:25.831 }' 00:27:25.831 18:25:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:25.831 18:25:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:26.090 18:25:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:27:26.090 18:25:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:26.090 18:25:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:26.090 [2024-12-06 18:25:56.996652] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:27:26.090 [2024-12-06 18:25:56.996710] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:27:26.090 18:25:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:26.090 18:25:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:27:26.090 18:25:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:26.090 18:25:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:26.090 [2024-12-06 
18:25:57.008710] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:27:26.090 [2024-12-06 18:25:57.010912] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:27:26.090 [2024-12-06 18:25:57.011175] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:27:26.090 [2024-12-06 18:25:57.011200] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:27:26.090 [2024-12-06 18:25:57.011218] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:27:26.090 18:25:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:26.090 18:25:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:27:26.090 18:25:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:27:26.090 18:25:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:27:26.091 18:25:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:27:26.091 18:25:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:27:26.091 18:25:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:27:26.091 18:25:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:27:26.091 18:25:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:27:26.091 18:25:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:26.091 18:25:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:26.091 18:25:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:27:26.091 18:25:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:26.091 18:25:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:26.091 18:25:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:26.091 18:25:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:26.091 18:25:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:26.422 18:25:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:26.422 18:25:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:26.422 "name": "Existed_Raid", 00:27:26.422 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:26.422 "strip_size_kb": 64, 00:27:26.422 "state": "configuring", 00:27:26.422 "raid_level": "raid0", 00:27:26.422 "superblock": false, 00:27:26.422 "num_base_bdevs": 3, 00:27:26.422 "num_base_bdevs_discovered": 1, 00:27:26.422 "num_base_bdevs_operational": 3, 00:27:26.422 "base_bdevs_list": [ 00:27:26.422 { 00:27:26.422 "name": "BaseBdev1", 00:27:26.422 "uuid": "cd7156f3-8b63-4c0f-aea8-aa743e25e62a", 00:27:26.422 "is_configured": true, 00:27:26.422 "data_offset": 0, 00:27:26.422 "data_size": 65536 00:27:26.422 }, 00:27:26.422 { 00:27:26.422 "name": "BaseBdev2", 00:27:26.422 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:26.422 "is_configured": false, 00:27:26.422 "data_offset": 0, 00:27:26.422 "data_size": 0 00:27:26.422 }, 00:27:26.422 { 00:27:26.422 "name": "BaseBdev3", 00:27:26.422 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:26.422 "is_configured": false, 00:27:26.422 "data_offset": 0, 00:27:26.422 "data_size": 0 00:27:26.422 } 00:27:26.422 ] 00:27:26.422 }' 00:27:26.422 18:25:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:27:26.422 18:25:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:26.682 18:25:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:27:26.682 18:25:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:26.682 18:25:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:26.682 [2024-12-06 18:25:57.470092] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:27:26.682 BaseBdev2 00:27:26.682 18:25:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:26.682 18:25:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:27:26.682 18:25:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:27:26.682 18:25:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:27:26.682 18:25:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:27:26.682 18:25:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:27:26.682 18:25:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:27:26.682 18:25:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:27:26.682 18:25:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:26.682 18:25:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:26.682 18:25:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:26.682 18:25:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:27:26.682 18:25:57 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:26.682 18:25:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:26.682 [ 00:27:26.682 { 00:27:26.682 "name": "BaseBdev2", 00:27:26.682 "aliases": [ 00:27:26.682 "5ee877d0-d42d-48ca-9be1-a8d25026bf04" 00:27:26.682 ], 00:27:26.682 "product_name": "Malloc disk", 00:27:26.682 "block_size": 512, 00:27:26.682 "num_blocks": 65536, 00:27:26.682 "uuid": "5ee877d0-d42d-48ca-9be1-a8d25026bf04", 00:27:26.682 "assigned_rate_limits": { 00:27:26.682 "rw_ios_per_sec": 0, 00:27:26.682 "rw_mbytes_per_sec": 0, 00:27:26.682 "r_mbytes_per_sec": 0, 00:27:26.682 "w_mbytes_per_sec": 0 00:27:26.682 }, 00:27:26.682 "claimed": true, 00:27:26.682 "claim_type": "exclusive_write", 00:27:26.683 "zoned": false, 00:27:26.683 "supported_io_types": { 00:27:26.683 "read": true, 00:27:26.683 "write": true, 00:27:26.683 "unmap": true, 00:27:26.683 "flush": true, 00:27:26.683 "reset": true, 00:27:26.683 "nvme_admin": false, 00:27:26.683 "nvme_io": false, 00:27:26.683 "nvme_io_md": false, 00:27:26.683 "write_zeroes": true, 00:27:26.683 "zcopy": true, 00:27:26.683 "get_zone_info": false, 00:27:26.683 "zone_management": false, 00:27:26.683 "zone_append": false, 00:27:26.683 "compare": false, 00:27:26.683 "compare_and_write": false, 00:27:26.683 "abort": true, 00:27:26.683 "seek_hole": false, 00:27:26.683 "seek_data": false, 00:27:26.683 "copy": true, 00:27:26.683 "nvme_iov_md": false 00:27:26.683 }, 00:27:26.683 "memory_domains": [ 00:27:26.683 { 00:27:26.683 "dma_device_id": "system", 00:27:26.683 "dma_device_type": 1 00:27:26.683 }, 00:27:26.683 { 00:27:26.683 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:26.683 "dma_device_type": 2 00:27:26.683 } 00:27:26.683 ], 00:27:26.683 "driver_specific": {} 00:27:26.683 } 00:27:26.683 ] 00:27:26.683 18:25:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:26.683 18:25:57 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:27:26.683 18:25:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:27:26.683 18:25:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:27:26.683 18:25:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:27:26.683 18:25:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:27:26.683 18:25:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:27:26.683 18:25:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:27:26.683 18:25:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:27:26.683 18:25:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:27:26.683 18:25:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:26.683 18:25:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:26.683 18:25:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:26.683 18:25:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:26.683 18:25:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:26.683 18:25:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:26.683 18:25:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:26.683 18:25:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:26.683 18:25:57 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:26.683 18:25:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:26.683 "name": "Existed_Raid", 00:27:26.683 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:26.683 "strip_size_kb": 64, 00:27:26.683 "state": "configuring", 00:27:26.683 "raid_level": "raid0", 00:27:26.683 "superblock": false, 00:27:26.683 "num_base_bdevs": 3, 00:27:26.683 "num_base_bdevs_discovered": 2, 00:27:26.683 "num_base_bdevs_operational": 3, 00:27:26.683 "base_bdevs_list": [ 00:27:26.683 { 00:27:26.683 "name": "BaseBdev1", 00:27:26.683 "uuid": "cd7156f3-8b63-4c0f-aea8-aa743e25e62a", 00:27:26.683 "is_configured": true, 00:27:26.683 "data_offset": 0, 00:27:26.683 "data_size": 65536 00:27:26.683 }, 00:27:26.683 { 00:27:26.683 "name": "BaseBdev2", 00:27:26.683 "uuid": "5ee877d0-d42d-48ca-9be1-a8d25026bf04", 00:27:26.683 "is_configured": true, 00:27:26.683 "data_offset": 0, 00:27:26.683 "data_size": 65536 00:27:26.683 }, 00:27:26.683 { 00:27:26.683 "name": "BaseBdev3", 00:27:26.683 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:26.683 "is_configured": false, 00:27:26.683 "data_offset": 0, 00:27:26.683 "data_size": 0 00:27:26.683 } 00:27:26.683 ] 00:27:26.683 }' 00:27:26.683 18:25:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:26.683 18:25:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:27.249 18:25:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:27:27.249 18:25:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:27.249 18:25:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:27.249 [2024-12-06 18:25:58.010708] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:27:27.249 [2024-12-06 18:25:58.010750] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:27:27.249 [2024-12-06 18:25:58.010769] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:27:27.249 [2024-12-06 18:25:58.011059] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:27:27.249 [2024-12-06 18:25:58.011415] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:27:27.249 [2024-12-06 18:25:58.011465] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:27:27.249 [2024-12-06 18:25:58.011783] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:27.249 BaseBdev3 00:27:27.249 18:25:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:27.249 18:25:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:27:27.249 18:25:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:27:27.249 18:25:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:27:27.249 18:25:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:27:27.249 18:25:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:27:27.249 18:25:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:27:27.249 18:25:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:27:27.249 18:25:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:27.249 18:25:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:27.249 18:25:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:27.249 
18:25:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:27:27.249 18:25:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:27.249 18:25:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:27.249 [ 00:27:27.249 { 00:27:27.249 "name": "BaseBdev3", 00:27:27.249 "aliases": [ 00:27:27.249 "6ed7f9fd-f96c-4f30-8a22-ac053da931fd" 00:27:27.249 ], 00:27:27.249 "product_name": "Malloc disk", 00:27:27.249 "block_size": 512, 00:27:27.249 "num_blocks": 65536, 00:27:27.249 "uuid": "6ed7f9fd-f96c-4f30-8a22-ac053da931fd", 00:27:27.249 "assigned_rate_limits": { 00:27:27.249 "rw_ios_per_sec": 0, 00:27:27.249 "rw_mbytes_per_sec": 0, 00:27:27.249 "r_mbytes_per_sec": 0, 00:27:27.249 "w_mbytes_per_sec": 0 00:27:27.249 }, 00:27:27.249 "claimed": true, 00:27:27.249 "claim_type": "exclusive_write", 00:27:27.249 "zoned": false, 00:27:27.249 "supported_io_types": { 00:27:27.249 "read": true, 00:27:27.249 "write": true, 00:27:27.249 "unmap": true, 00:27:27.249 "flush": true, 00:27:27.249 "reset": true, 00:27:27.249 "nvme_admin": false, 00:27:27.249 "nvme_io": false, 00:27:27.249 "nvme_io_md": false, 00:27:27.249 "write_zeroes": true, 00:27:27.249 "zcopy": true, 00:27:27.249 "get_zone_info": false, 00:27:27.249 "zone_management": false, 00:27:27.249 "zone_append": false, 00:27:27.249 "compare": false, 00:27:27.249 "compare_and_write": false, 00:27:27.249 "abort": true, 00:27:27.249 "seek_hole": false, 00:27:27.249 "seek_data": false, 00:27:27.249 "copy": true, 00:27:27.249 "nvme_iov_md": false 00:27:27.249 }, 00:27:27.249 "memory_domains": [ 00:27:27.249 { 00:27:27.249 "dma_device_id": "system", 00:27:27.249 "dma_device_type": 1 00:27:27.249 }, 00:27:27.249 { 00:27:27.249 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:27.249 "dma_device_type": 2 00:27:27.249 } 00:27:27.249 ], 00:27:27.249 "driver_specific": {} 00:27:27.249 } 00:27:27.249 ] 
00:27:27.249 18:25:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:27.249 18:25:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:27:27.249 18:25:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:27:27.249 18:25:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:27:27.249 18:25:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:27:27.249 18:25:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:27:27.249 18:25:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:27:27.249 18:25:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:27:27.249 18:25:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:27:27.249 18:25:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:27:27.249 18:25:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:27.249 18:25:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:27.249 18:25:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:27.249 18:25:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:27.249 18:25:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:27.249 18:25:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:27.249 18:25:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:27.249 18:25:58 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:27:27.249 18:25:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:27.249 18:25:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:27.249 "name": "Existed_Raid", 00:27:27.249 "uuid": "d5e0508e-b01d-497d-9612-461ab4390090", 00:27:27.249 "strip_size_kb": 64, 00:27:27.249 "state": "online", 00:27:27.249 "raid_level": "raid0", 00:27:27.249 "superblock": false, 00:27:27.249 "num_base_bdevs": 3, 00:27:27.249 "num_base_bdevs_discovered": 3, 00:27:27.249 "num_base_bdevs_operational": 3, 00:27:27.249 "base_bdevs_list": [ 00:27:27.249 { 00:27:27.249 "name": "BaseBdev1", 00:27:27.249 "uuid": "cd7156f3-8b63-4c0f-aea8-aa743e25e62a", 00:27:27.249 "is_configured": true, 00:27:27.249 "data_offset": 0, 00:27:27.249 "data_size": 65536 00:27:27.249 }, 00:27:27.249 { 00:27:27.249 "name": "BaseBdev2", 00:27:27.249 "uuid": "5ee877d0-d42d-48ca-9be1-a8d25026bf04", 00:27:27.249 "is_configured": true, 00:27:27.249 "data_offset": 0, 00:27:27.249 "data_size": 65536 00:27:27.249 }, 00:27:27.249 { 00:27:27.249 "name": "BaseBdev3", 00:27:27.249 "uuid": "6ed7f9fd-f96c-4f30-8a22-ac053da931fd", 00:27:27.249 "is_configured": true, 00:27:27.249 "data_offset": 0, 00:27:27.249 "data_size": 65536 00:27:27.249 } 00:27:27.249 ] 00:27:27.249 }' 00:27:27.249 18:25:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:27.249 18:25:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:27.814 18:25:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:27:27.814 18:25:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:27:27.814 18:25:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:27:27.814 18:25:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # 
local base_bdev_names 00:27:27.814 18:25:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:27:27.814 18:25:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:27:27.814 18:25:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:27:27.814 18:25:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:27:27.814 18:25:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:27.814 18:25:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:27.814 [2024-12-06 18:25:58.502437] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:27:27.814 18:25:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:27.814 18:25:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:27:27.814 "name": "Existed_Raid", 00:27:27.814 "aliases": [ 00:27:27.814 "d5e0508e-b01d-497d-9612-461ab4390090" 00:27:27.814 ], 00:27:27.814 "product_name": "Raid Volume", 00:27:27.814 "block_size": 512, 00:27:27.814 "num_blocks": 196608, 00:27:27.814 "uuid": "d5e0508e-b01d-497d-9612-461ab4390090", 00:27:27.814 "assigned_rate_limits": { 00:27:27.814 "rw_ios_per_sec": 0, 00:27:27.814 "rw_mbytes_per_sec": 0, 00:27:27.814 "r_mbytes_per_sec": 0, 00:27:27.814 "w_mbytes_per_sec": 0 00:27:27.814 }, 00:27:27.814 "claimed": false, 00:27:27.814 "zoned": false, 00:27:27.814 "supported_io_types": { 00:27:27.814 "read": true, 00:27:27.814 "write": true, 00:27:27.814 "unmap": true, 00:27:27.814 "flush": true, 00:27:27.814 "reset": true, 00:27:27.814 "nvme_admin": false, 00:27:27.814 "nvme_io": false, 00:27:27.814 "nvme_io_md": false, 00:27:27.814 "write_zeroes": true, 00:27:27.814 "zcopy": false, 00:27:27.814 "get_zone_info": false, 00:27:27.814 "zone_management": false, 00:27:27.814 
"zone_append": false, 00:27:27.814 "compare": false, 00:27:27.814 "compare_and_write": false, 00:27:27.814 "abort": false, 00:27:27.814 "seek_hole": false, 00:27:27.814 "seek_data": false, 00:27:27.814 "copy": false, 00:27:27.814 "nvme_iov_md": false 00:27:27.814 }, 00:27:27.814 "memory_domains": [ 00:27:27.814 { 00:27:27.814 "dma_device_id": "system", 00:27:27.814 "dma_device_type": 1 00:27:27.814 }, 00:27:27.814 { 00:27:27.814 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:27.814 "dma_device_type": 2 00:27:27.814 }, 00:27:27.814 { 00:27:27.814 "dma_device_id": "system", 00:27:27.814 "dma_device_type": 1 00:27:27.814 }, 00:27:27.814 { 00:27:27.814 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:27.814 "dma_device_type": 2 00:27:27.814 }, 00:27:27.814 { 00:27:27.814 "dma_device_id": "system", 00:27:27.814 "dma_device_type": 1 00:27:27.814 }, 00:27:27.814 { 00:27:27.814 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:27.814 "dma_device_type": 2 00:27:27.814 } 00:27:27.814 ], 00:27:27.814 "driver_specific": { 00:27:27.814 "raid": { 00:27:27.814 "uuid": "d5e0508e-b01d-497d-9612-461ab4390090", 00:27:27.814 "strip_size_kb": 64, 00:27:27.814 "state": "online", 00:27:27.814 "raid_level": "raid0", 00:27:27.814 "superblock": false, 00:27:27.814 "num_base_bdevs": 3, 00:27:27.814 "num_base_bdevs_discovered": 3, 00:27:27.814 "num_base_bdevs_operational": 3, 00:27:27.814 "base_bdevs_list": [ 00:27:27.814 { 00:27:27.814 "name": "BaseBdev1", 00:27:27.814 "uuid": "cd7156f3-8b63-4c0f-aea8-aa743e25e62a", 00:27:27.814 "is_configured": true, 00:27:27.814 "data_offset": 0, 00:27:27.814 "data_size": 65536 00:27:27.814 }, 00:27:27.814 { 00:27:27.814 "name": "BaseBdev2", 00:27:27.815 "uuid": "5ee877d0-d42d-48ca-9be1-a8d25026bf04", 00:27:27.815 "is_configured": true, 00:27:27.815 "data_offset": 0, 00:27:27.815 "data_size": 65536 00:27:27.815 }, 00:27:27.815 { 00:27:27.815 "name": "BaseBdev3", 00:27:27.815 "uuid": "6ed7f9fd-f96c-4f30-8a22-ac053da931fd", 00:27:27.815 "is_configured": true, 
00:27:27.815 "data_offset": 0, 00:27:27.815 "data_size": 65536 00:27:27.815 } 00:27:27.815 ] 00:27:27.815 } 00:27:27.815 } 00:27:27.815 }' 00:27:27.815 18:25:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:27:27.815 18:25:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:27:27.815 BaseBdev2 00:27:27.815 BaseBdev3' 00:27:27.815 18:25:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:27:27.815 18:25:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:27:27.815 18:25:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:27:27.815 18:25:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:27:27.815 18:25:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:27.815 18:25:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:27:27.815 18:25:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:27.815 18:25:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:27.815 18:25:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:27:27.815 18:25:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:27:27.815 18:25:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:27:27.815 18:25:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:27:27.815 18:25:58 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:27:27.815 18:25:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:27.815 18:25:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:27.815 18:25:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:27.815 18:25:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:27:27.815 18:25:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:27:27.815 18:25:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:27:27.815 18:25:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:27:27.815 18:25:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:27.815 18:25:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:27.815 18:25:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:27:27.815 18:25:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:28.073 18:25:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:27:28.073 18:25:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:27:28.073 18:25:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:27:28.073 18:25:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:28.073 18:25:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:28.073 [2024-12-06 18:25:58.769839] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:27:28.073 [2024-12-06 18:25:58.769875] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:27:28.073 [2024-12-06 18:25:58.769942] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:27:28.073 18:25:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:28.073 18:25:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:27:28.073 18:25:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:27:28.073 18:25:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:27:28.073 18:25:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:27:28.073 18:25:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:27:28.073 18:25:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2 00:27:28.073 18:25:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:27:28.073 18:25:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:27:28.073 18:25:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:27:28.073 18:25:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:27:28.073 18:25:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:27:28.073 18:25:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:28.073 18:25:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:28.073 18:25:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:27:28.073 18:25:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:28.073 18:25:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:28.073 18:25:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:28.073 18:25:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:28.073 18:25:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:28.073 18:25:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:28.073 18:25:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:28.073 "name": "Existed_Raid", 00:27:28.073 "uuid": "d5e0508e-b01d-497d-9612-461ab4390090", 00:27:28.073 "strip_size_kb": 64, 00:27:28.073 "state": "offline", 00:27:28.073 "raid_level": "raid0", 00:27:28.073 "superblock": false, 00:27:28.073 "num_base_bdevs": 3, 00:27:28.073 "num_base_bdevs_discovered": 2, 00:27:28.073 "num_base_bdevs_operational": 2, 00:27:28.073 "base_bdevs_list": [ 00:27:28.073 { 00:27:28.073 "name": null, 00:27:28.073 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:28.073 "is_configured": false, 00:27:28.073 "data_offset": 0, 00:27:28.073 "data_size": 65536 00:27:28.073 }, 00:27:28.073 { 00:27:28.073 "name": "BaseBdev2", 00:27:28.073 "uuid": "5ee877d0-d42d-48ca-9be1-a8d25026bf04", 00:27:28.073 "is_configured": true, 00:27:28.073 "data_offset": 0, 00:27:28.073 "data_size": 65536 00:27:28.073 }, 00:27:28.073 { 00:27:28.073 "name": "BaseBdev3", 00:27:28.073 "uuid": "6ed7f9fd-f96c-4f30-8a22-ac053da931fd", 00:27:28.073 "is_configured": true, 00:27:28.073 "data_offset": 0, 00:27:28.073 "data_size": 65536 00:27:28.073 } 00:27:28.073 ] 00:27:28.073 }' 00:27:28.073 18:25:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:28.073 18:25:58 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:28.638 18:25:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:27:28.638 18:25:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:27:28.638 18:25:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:28.638 18:25:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:27:28.638 18:25:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:28.638 18:25:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:28.638 18:25:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:28.638 18:25:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:27:28.638 18:25:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:27:28.638 18:25:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:27:28.638 18:25:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:28.638 18:25:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:28.638 [2024-12-06 18:25:59.384750] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:27:28.638 18:25:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:28.638 18:25:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:27:28.638 18:25:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:27:28.638 18:25:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:28.638 18:25:59 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:28.638 18:25:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:27:28.638 18:25:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:28.638 18:25:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:28.638 18:25:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:27:28.638 18:25:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:27:28.638 18:25:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:27:28.638 18:25:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:28.638 18:25:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:28.638 [2024-12-06 18:25:59.537361] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:27:28.638 [2024-12-06 18:25:59.537414] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:27:28.896 18:25:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:28.896 18:25:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:27:28.896 18:25:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:27:28.896 18:25:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:28.896 18:25:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:27:28.896 18:25:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:28.896 18:25:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # 
set +x 00:27:28.896 18:25:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:28.896 18:25:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:27:28.896 18:25:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:27:28.896 18:25:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:27:28.896 18:25:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:27:28.896 18:25:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:27:28.896 18:25:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:27:28.896 18:25:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:28.896 18:25:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:28.896 BaseBdev2 00:27:28.896 18:25:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:28.896 18:25:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:27:28.896 18:25:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:27:28.896 18:25:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:27:28.896 18:25:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:27:28.896 18:25:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:27:28.896 18:25:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:27:28.896 18:25:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:27:28.896 18:25:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:27:28.896 18:25:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:28.896 18:25:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:28.896 18:25:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:27:28.896 18:25:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:28.896 18:25:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:28.896 [ 00:27:28.896 { 00:27:28.896 "name": "BaseBdev2", 00:27:28.896 "aliases": [ 00:27:28.896 "8b17bd81-f89f-40f8-82d1-cf5397a1d5c6" 00:27:28.896 ], 00:27:28.896 "product_name": "Malloc disk", 00:27:28.896 "block_size": 512, 00:27:28.896 "num_blocks": 65536, 00:27:28.896 "uuid": "8b17bd81-f89f-40f8-82d1-cf5397a1d5c6", 00:27:28.896 "assigned_rate_limits": { 00:27:28.896 "rw_ios_per_sec": 0, 00:27:28.896 "rw_mbytes_per_sec": 0, 00:27:28.896 "r_mbytes_per_sec": 0, 00:27:28.896 "w_mbytes_per_sec": 0 00:27:28.896 }, 00:27:28.896 "claimed": false, 00:27:28.896 "zoned": false, 00:27:28.896 "supported_io_types": { 00:27:28.896 "read": true, 00:27:28.896 "write": true, 00:27:28.896 "unmap": true, 00:27:28.896 "flush": true, 00:27:28.896 "reset": true, 00:27:28.896 "nvme_admin": false, 00:27:28.896 "nvme_io": false, 00:27:28.896 "nvme_io_md": false, 00:27:28.896 "write_zeroes": true, 00:27:28.896 "zcopy": true, 00:27:28.896 "get_zone_info": false, 00:27:28.896 "zone_management": false, 00:27:28.896 "zone_append": false, 00:27:28.896 "compare": false, 00:27:28.896 "compare_and_write": false, 00:27:28.896 "abort": true, 00:27:28.896 "seek_hole": false, 00:27:28.896 "seek_data": false, 00:27:28.896 "copy": true, 00:27:28.896 "nvme_iov_md": false 00:27:28.896 }, 00:27:28.896 "memory_domains": [ 00:27:28.896 { 00:27:28.896 "dma_device_id": "system", 00:27:28.896 "dma_device_type": 1 00:27:28.896 }, 
00:27:28.896 { 00:27:28.896 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:28.896 "dma_device_type": 2 00:27:28.896 } 00:27:28.896 ], 00:27:28.896 "driver_specific": {} 00:27:28.896 } 00:27:28.896 ] 00:27:28.896 18:25:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:28.896 18:25:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:27:28.896 18:25:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:27:28.896 18:25:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:27:28.896 18:25:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:27:28.896 18:25:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:28.896 18:25:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:28.896 BaseBdev3 00:27:28.896 18:25:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:28.896 18:25:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:27:28.896 18:25:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:27:28.896 18:25:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:27:28.896 18:25:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:27:28.896 18:25:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:27:28.896 18:25:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:27:28.896 18:25:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:27:28.896 18:25:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:27:28.896 18:25:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:28.896 18:25:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:28.896 18:25:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:27:28.896 18:25:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:28.896 18:25:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:28.896 [ 00:27:28.896 { 00:27:28.896 "name": "BaseBdev3", 00:27:28.896 "aliases": [ 00:27:28.896 "33968d18-f041-4116-9de1-268bd953749e" 00:27:28.896 ], 00:27:28.896 "product_name": "Malloc disk", 00:27:28.896 "block_size": 512, 00:27:28.896 "num_blocks": 65536, 00:27:28.896 "uuid": "33968d18-f041-4116-9de1-268bd953749e", 00:27:28.896 "assigned_rate_limits": { 00:27:28.896 "rw_ios_per_sec": 0, 00:27:28.896 "rw_mbytes_per_sec": 0, 00:27:28.896 "r_mbytes_per_sec": 0, 00:27:28.896 "w_mbytes_per_sec": 0 00:27:28.896 }, 00:27:28.896 "claimed": false, 00:27:28.896 "zoned": false, 00:27:28.896 "supported_io_types": { 00:27:28.896 "read": true, 00:27:28.896 "write": true, 00:27:28.896 "unmap": true, 00:27:28.896 "flush": true, 00:27:28.896 "reset": true, 00:27:28.896 "nvme_admin": false, 00:27:28.896 "nvme_io": false, 00:27:28.896 "nvme_io_md": false, 00:27:28.896 "write_zeroes": true, 00:27:28.896 "zcopy": true, 00:27:28.896 "get_zone_info": false, 00:27:28.896 "zone_management": false, 00:27:28.896 "zone_append": false, 00:27:28.896 "compare": false, 00:27:28.896 "compare_and_write": false, 00:27:28.896 "abort": true, 00:27:28.896 "seek_hole": false, 00:27:28.896 "seek_data": false, 00:27:28.896 "copy": true, 00:27:28.896 "nvme_iov_md": false 00:27:28.896 }, 00:27:28.896 "memory_domains": [ 00:27:28.896 { 00:27:28.896 "dma_device_id": "system", 00:27:28.896 "dma_device_type": 1 00:27:28.896 }, 00:27:28.896 { 
00:27:28.896 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:28.896 "dma_device_type": 2 00:27:28.896 } 00:27:28.896 ], 00:27:28.896 "driver_specific": {} 00:27:28.896 } 00:27:28.896 ] 00:27:28.896 18:25:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:28.896 18:25:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:27:28.896 18:25:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:27:28.897 18:25:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:27:28.897 18:25:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:27:28.897 18:25:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:28.897 18:25:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:28.897 [2024-12-06 18:25:59.841509] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:27:28.897 [2024-12-06 18:25:59.841675] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:27:28.897 [2024-12-06 18:25:59.841808] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:27:29.154 [2024-12-06 18:25:59.844021] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:27:29.154 18:25:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:29.154 18:25:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:27:29.154 18:25:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:27:29.154 18:25:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=configuring 00:27:29.155 18:25:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:27:29.155 18:25:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:27:29.155 18:25:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:27:29.155 18:25:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:29.155 18:25:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:29.155 18:25:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:29.155 18:25:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:29.155 18:25:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:29.155 18:25:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:29.155 18:25:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:29.155 18:25:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:29.155 18:25:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:29.155 18:25:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:29.155 "name": "Existed_Raid", 00:27:29.155 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:29.155 "strip_size_kb": 64, 00:27:29.155 "state": "configuring", 00:27:29.155 "raid_level": "raid0", 00:27:29.155 "superblock": false, 00:27:29.155 "num_base_bdevs": 3, 00:27:29.155 "num_base_bdevs_discovered": 2, 00:27:29.155 "num_base_bdevs_operational": 3, 00:27:29.155 "base_bdevs_list": [ 00:27:29.155 { 00:27:29.155 "name": "BaseBdev1", 00:27:29.155 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:29.155 
"is_configured": false, 00:27:29.155 "data_offset": 0, 00:27:29.155 "data_size": 0 00:27:29.155 }, 00:27:29.155 { 00:27:29.155 "name": "BaseBdev2", 00:27:29.155 "uuid": "8b17bd81-f89f-40f8-82d1-cf5397a1d5c6", 00:27:29.155 "is_configured": true, 00:27:29.155 "data_offset": 0, 00:27:29.155 "data_size": 65536 00:27:29.155 }, 00:27:29.155 { 00:27:29.155 "name": "BaseBdev3", 00:27:29.155 "uuid": "33968d18-f041-4116-9de1-268bd953749e", 00:27:29.155 "is_configured": true, 00:27:29.155 "data_offset": 0, 00:27:29.155 "data_size": 65536 00:27:29.155 } 00:27:29.155 ] 00:27:29.155 }' 00:27:29.155 18:25:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:29.155 18:25:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:29.413 18:26:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:27:29.413 18:26:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:29.413 18:26:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:29.413 [2024-12-06 18:26:00.260967] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:27:29.413 18:26:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:29.413 18:26:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:27:29.413 18:26:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:27:29.413 18:26:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:27:29.413 18:26:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:27:29.413 18:26:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:27:29.413 18:26:00 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:27:29.413 18:26:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:29.413 18:26:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:29.413 18:26:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:29.413 18:26:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:29.413 18:26:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:29.413 18:26:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:29.413 18:26:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:29.413 18:26:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:29.413 18:26:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:29.413 18:26:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:29.413 "name": "Existed_Raid", 00:27:29.413 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:29.413 "strip_size_kb": 64, 00:27:29.413 "state": "configuring", 00:27:29.413 "raid_level": "raid0", 00:27:29.413 "superblock": false, 00:27:29.413 "num_base_bdevs": 3, 00:27:29.413 "num_base_bdevs_discovered": 1, 00:27:29.413 "num_base_bdevs_operational": 3, 00:27:29.413 "base_bdevs_list": [ 00:27:29.413 { 00:27:29.413 "name": "BaseBdev1", 00:27:29.413 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:29.413 "is_configured": false, 00:27:29.413 "data_offset": 0, 00:27:29.413 "data_size": 0 00:27:29.413 }, 00:27:29.413 { 00:27:29.413 "name": null, 00:27:29.413 "uuid": "8b17bd81-f89f-40f8-82d1-cf5397a1d5c6", 00:27:29.413 "is_configured": false, 00:27:29.413 "data_offset": 0, 
00:27:29.413 "data_size": 65536 00:27:29.413 }, 00:27:29.413 { 00:27:29.413 "name": "BaseBdev3", 00:27:29.413 "uuid": "33968d18-f041-4116-9de1-268bd953749e", 00:27:29.413 "is_configured": true, 00:27:29.413 "data_offset": 0, 00:27:29.413 "data_size": 65536 00:27:29.413 } 00:27:29.413 ] 00:27:29.413 }' 00:27:29.413 18:26:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:29.413 18:26:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:29.978 18:26:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:29.978 18:26:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:29.978 18:26:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:27:29.978 18:26:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:29.978 18:26:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:29.978 18:26:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:27:29.978 18:26:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:27:29.978 18:26:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:29.978 18:26:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:29.978 [2024-12-06 18:26:00.784896] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:27:29.978 BaseBdev1 00:27:29.978 18:26:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:29.978 18:26:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:27:29.978 18:26:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local 
bdev_name=BaseBdev1 00:27:29.978 18:26:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:27:29.978 18:26:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:27:29.978 18:26:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:27:29.978 18:26:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:27:29.978 18:26:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:27:29.978 18:26:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:29.978 18:26:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:29.978 18:26:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:29.978 18:26:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:27:29.978 18:26:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:29.978 18:26:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:29.978 [ 00:27:29.978 { 00:27:29.978 "name": "BaseBdev1", 00:27:29.978 "aliases": [ 00:27:29.978 "c563fa18-f155-4413-a091-c573f24cbde3" 00:27:29.978 ], 00:27:29.978 "product_name": "Malloc disk", 00:27:29.978 "block_size": 512, 00:27:29.978 "num_blocks": 65536, 00:27:29.978 "uuid": "c563fa18-f155-4413-a091-c573f24cbde3", 00:27:29.978 "assigned_rate_limits": { 00:27:29.978 "rw_ios_per_sec": 0, 00:27:29.978 "rw_mbytes_per_sec": 0, 00:27:29.978 "r_mbytes_per_sec": 0, 00:27:29.978 "w_mbytes_per_sec": 0 00:27:29.978 }, 00:27:29.978 "claimed": true, 00:27:29.978 "claim_type": "exclusive_write", 00:27:29.978 "zoned": false, 00:27:29.978 "supported_io_types": { 00:27:29.978 "read": true, 00:27:29.978 "write": true, 00:27:29.978 "unmap": 
true, 00:27:29.978 "flush": true, 00:27:29.978 "reset": true, 00:27:29.978 "nvme_admin": false, 00:27:29.978 "nvme_io": false, 00:27:29.978 "nvme_io_md": false, 00:27:29.978 "write_zeroes": true, 00:27:29.978 "zcopy": true, 00:27:29.978 "get_zone_info": false, 00:27:29.978 "zone_management": false, 00:27:29.978 "zone_append": false, 00:27:29.978 "compare": false, 00:27:29.978 "compare_and_write": false, 00:27:29.978 "abort": true, 00:27:29.978 "seek_hole": false, 00:27:29.978 "seek_data": false, 00:27:29.978 "copy": true, 00:27:29.978 "nvme_iov_md": false 00:27:29.978 }, 00:27:29.978 "memory_domains": [ 00:27:29.978 { 00:27:29.978 "dma_device_id": "system", 00:27:29.978 "dma_device_type": 1 00:27:29.978 }, 00:27:29.978 { 00:27:29.978 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:29.978 "dma_device_type": 2 00:27:29.978 } 00:27:29.978 ], 00:27:29.978 "driver_specific": {} 00:27:29.978 } 00:27:29.978 ] 00:27:29.978 18:26:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:29.978 18:26:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:27:29.978 18:26:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:27:29.978 18:26:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:27:29.978 18:26:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:27:29.978 18:26:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:27:29.978 18:26:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:27:29.978 18:26:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:27:29.978 18:26:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:29.978 18:26:00 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:29.978 18:26:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:29.978 18:26:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:29.978 18:26:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:29.978 18:26:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:29.978 18:26:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:29.978 18:26:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:29.978 18:26:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:29.978 18:26:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:29.978 "name": "Existed_Raid", 00:27:29.978 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:29.978 "strip_size_kb": 64, 00:27:29.978 "state": "configuring", 00:27:29.978 "raid_level": "raid0", 00:27:29.978 "superblock": false, 00:27:29.978 "num_base_bdevs": 3, 00:27:29.978 "num_base_bdevs_discovered": 2, 00:27:29.978 "num_base_bdevs_operational": 3, 00:27:29.978 "base_bdevs_list": [ 00:27:29.978 { 00:27:29.978 "name": "BaseBdev1", 00:27:29.978 "uuid": "c563fa18-f155-4413-a091-c573f24cbde3", 00:27:29.978 "is_configured": true, 00:27:29.978 "data_offset": 0, 00:27:29.978 "data_size": 65536 00:27:29.978 }, 00:27:29.978 { 00:27:29.978 "name": null, 00:27:29.978 "uuid": "8b17bd81-f89f-40f8-82d1-cf5397a1d5c6", 00:27:29.978 "is_configured": false, 00:27:29.978 "data_offset": 0, 00:27:29.978 "data_size": 65536 00:27:29.978 }, 00:27:29.978 { 00:27:29.978 "name": "BaseBdev3", 00:27:29.978 "uuid": "33968d18-f041-4116-9de1-268bd953749e", 00:27:29.978 "is_configured": true, 00:27:29.978 "data_offset": 0, 
00:27:29.978 "data_size": 65536 00:27:29.978 } 00:27:29.978 ] 00:27:29.978 }' 00:27:29.978 18:26:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:29.978 18:26:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:30.598 18:26:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:27:30.598 18:26:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:30.598 18:26:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:30.598 18:26:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:30.598 18:26:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:30.598 18:26:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:27:30.598 18:26:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:27:30.598 18:26:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:30.598 18:26:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:30.598 [2024-12-06 18:26:01.304283] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:27:30.598 18:26:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:30.598 18:26:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:27:30.598 18:26:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:27:30.598 18:26:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:27:30.598 18:26:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid0 00:27:30.598 18:26:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:27:30.598 18:26:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:27:30.598 18:26:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:30.598 18:26:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:30.598 18:26:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:30.598 18:26:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:30.598 18:26:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:30.598 18:26:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:30.598 18:26:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:30.598 18:26:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:30.598 18:26:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:30.598 18:26:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:30.598 "name": "Existed_Raid", 00:27:30.598 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:30.598 "strip_size_kb": 64, 00:27:30.598 "state": "configuring", 00:27:30.598 "raid_level": "raid0", 00:27:30.598 "superblock": false, 00:27:30.598 "num_base_bdevs": 3, 00:27:30.598 "num_base_bdevs_discovered": 1, 00:27:30.598 "num_base_bdevs_operational": 3, 00:27:30.598 "base_bdevs_list": [ 00:27:30.598 { 00:27:30.598 "name": "BaseBdev1", 00:27:30.598 "uuid": "c563fa18-f155-4413-a091-c573f24cbde3", 00:27:30.598 "is_configured": true, 00:27:30.598 "data_offset": 0, 00:27:30.598 "data_size": 65536 00:27:30.598 }, 00:27:30.598 { 
00:27:30.598 "name": null, 00:27:30.598 "uuid": "8b17bd81-f89f-40f8-82d1-cf5397a1d5c6", 00:27:30.598 "is_configured": false, 00:27:30.598 "data_offset": 0, 00:27:30.598 "data_size": 65536 00:27:30.598 }, 00:27:30.598 { 00:27:30.598 "name": null, 00:27:30.598 "uuid": "33968d18-f041-4116-9de1-268bd953749e", 00:27:30.598 "is_configured": false, 00:27:30.598 "data_offset": 0, 00:27:30.598 "data_size": 65536 00:27:30.598 } 00:27:30.598 ] 00:27:30.598 }' 00:27:30.598 18:26:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:30.598 18:26:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:30.871 18:26:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:27:30.871 18:26:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:30.871 18:26:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:30.871 18:26:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:30.871 18:26:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:30.871 18:26:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:27:30.871 18:26:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:27:30.871 18:26:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:30.871 18:26:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:30.871 [2024-12-06 18:26:01.747744] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:27:30.871 18:26:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:30.871 18:26:01 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:27:30.871 18:26:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:27:30.871 18:26:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:27:30.871 18:26:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:27:30.871 18:26:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:27:30.871 18:26:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:27:30.871 18:26:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:30.871 18:26:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:30.871 18:26:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:30.871 18:26:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:30.871 18:26:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:30.871 18:26:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:30.871 18:26:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:30.871 18:26:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:30.871 18:26:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:30.871 18:26:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:30.871 "name": "Existed_Raid", 00:27:30.871 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:30.871 "strip_size_kb": 64, 00:27:30.871 "state": "configuring", 00:27:30.871 "raid_level": "raid0", 00:27:30.871 
"superblock": false, 00:27:30.871 "num_base_bdevs": 3, 00:27:30.871 "num_base_bdevs_discovered": 2, 00:27:30.871 "num_base_bdevs_operational": 3, 00:27:30.871 "base_bdevs_list": [ 00:27:30.871 { 00:27:30.871 "name": "BaseBdev1", 00:27:30.871 "uuid": "c563fa18-f155-4413-a091-c573f24cbde3", 00:27:30.871 "is_configured": true, 00:27:30.871 "data_offset": 0, 00:27:30.871 "data_size": 65536 00:27:30.871 }, 00:27:30.872 { 00:27:30.872 "name": null, 00:27:30.872 "uuid": "8b17bd81-f89f-40f8-82d1-cf5397a1d5c6", 00:27:30.872 "is_configured": false, 00:27:30.872 "data_offset": 0, 00:27:30.872 "data_size": 65536 00:27:30.872 }, 00:27:30.872 { 00:27:30.872 "name": "BaseBdev3", 00:27:30.872 "uuid": "33968d18-f041-4116-9de1-268bd953749e", 00:27:30.872 "is_configured": true, 00:27:30.872 "data_offset": 0, 00:27:30.872 "data_size": 65536 00:27:30.872 } 00:27:30.872 ] 00:27:30.872 }' 00:27:30.872 18:26:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:30.872 18:26:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:31.439 18:26:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:27:31.439 18:26:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:31.439 18:26:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:31.439 18:26:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:31.439 18:26:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:31.439 18:26:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:27:31.439 18:26:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:27:31.439 18:26:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:27:31.439 18:26:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:31.439 [2024-12-06 18:26:02.179164] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:27:31.440 18:26:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:31.440 18:26:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:27:31.440 18:26:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:27:31.440 18:26:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:27:31.440 18:26:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:27:31.440 18:26:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:27:31.440 18:26:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:27:31.440 18:26:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:31.440 18:26:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:31.440 18:26:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:31.440 18:26:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:31.440 18:26:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:31.440 18:26:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:31.440 18:26:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:31.440 18:26:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:31.440 18:26:02 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:31.440 18:26:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:31.440 "name": "Existed_Raid", 00:27:31.440 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:31.440 "strip_size_kb": 64, 00:27:31.440 "state": "configuring", 00:27:31.440 "raid_level": "raid0", 00:27:31.440 "superblock": false, 00:27:31.440 "num_base_bdevs": 3, 00:27:31.440 "num_base_bdevs_discovered": 1, 00:27:31.440 "num_base_bdevs_operational": 3, 00:27:31.440 "base_bdevs_list": [ 00:27:31.440 { 00:27:31.440 "name": null, 00:27:31.440 "uuid": "c563fa18-f155-4413-a091-c573f24cbde3", 00:27:31.440 "is_configured": false, 00:27:31.440 "data_offset": 0, 00:27:31.440 "data_size": 65536 00:27:31.440 }, 00:27:31.440 { 00:27:31.440 "name": null, 00:27:31.440 "uuid": "8b17bd81-f89f-40f8-82d1-cf5397a1d5c6", 00:27:31.440 "is_configured": false, 00:27:31.440 "data_offset": 0, 00:27:31.440 "data_size": 65536 00:27:31.440 }, 00:27:31.440 { 00:27:31.440 "name": "BaseBdev3", 00:27:31.440 "uuid": "33968d18-f041-4116-9de1-268bd953749e", 00:27:31.440 "is_configured": true, 00:27:31.440 "data_offset": 0, 00:27:31.440 "data_size": 65536 00:27:31.440 } 00:27:31.440 ] 00:27:31.440 }' 00:27:31.440 18:26:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:31.440 18:26:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:32.006 18:26:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:32.007 18:26:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:32.007 18:26:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:32.007 18:26:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:27:32.007 18:26:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- 
# [[ 0 == 0 ]] 00:27:32.007 18:26:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:27:32.007 18:26:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:27:32.007 18:26:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:32.007 18:26:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:32.007 [2024-12-06 18:26:02.722651] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:27:32.007 18:26:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:32.007 18:26:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:27:32.007 18:26:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:27:32.007 18:26:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:27:32.007 18:26:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:27:32.007 18:26:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:27:32.007 18:26:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:27:32.007 18:26:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:32.007 18:26:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:32.007 18:26:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:32.007 18:26:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:32.007 18:26:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:27:32.007 18:26:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:32.007 18:26:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:32.007 18:26:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:32.007 18:26:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:32.007 18:26:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:32.007 "name": "Existed_Raid", 00:27:32.007 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:32.007 "strip_size_kb": 64, 00:27:32.007 "state": "configuring", 00:27:32.007 "raid_level": "raid0", 00:27:32.007 "superblock": false, 00:27:32.007 "num_base_bdevs": 3, 00:27:32.007 "num_base_bdevs_discovered": 2, 00:27:32.007 "num_base_bdevs_operational": 3, 00:27:32.007 "base_bdevs_list": [ 00:27:32.007 { 00:27:32.007 "name": null, 00:27:32.007 "uuid": "c563fa18-f155-4413-a091-c573f24cbde3", 00:27:32.007 "is_configured": false, 00:27:32.007 "data_offset": 0, 00:27:32.007 "data_size": 65536 00:27:32.007 }, 00:27:32.007 { 00:27:32.007 "name": "BaseBdev2", 00:27:32.007 "uuid": "8b17bd81-f89f-40f8-82d1-cf5397a1d5c6", 00:27:32.007 "is_configured": true, 00:27:32.007 "data_offset": 0, 00:27:32.007 "data_size": 65536 00:27:32.007 }, 00:27:32.007 { 00:27:32.007 "name": "BaseBdev3", 00:27:32.007 "uuid": "33968d18-f041-4116-9de1-268bd953749e", 00:27:32.007 "is_configured": true, 00:27:32.007 "data_offset": 0, 00:27:32.007 "data_size": 65536 00:27:32.007 } 00:27:32.007 ] 00:27:32.007 }' 00:27:32.007 18:26:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:32.007 18:26:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:32.265 18:26:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:27:32.265 
18:26:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:32.265 18:26:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:32.265 18:26:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:32.265 18:26:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:32.265 18:26:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:27:32.265 18:26:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:27:32.265 18:26:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:32.265 18:26:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:32.265 18:26:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:32.524 18:26:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:32.524 18:26:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u c563fa18-f155-4413-a091-c573f24cbde3 00:27:32.524 18:26:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:32.524 18:26:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:32.524 [2024-12-06 18:26:03.285748] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:27:32.524 [2024-12-06 18:26:03.285816] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:27:32.524 [2024-12-06 18:26:03.285828] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:27:32.524 [2024-12-06 18:26:03.286148] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 
00:27:32.524 [2024-12-06 18:26:03.286345] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:27:32.524 [2024-12-06 18:26:03.286358] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:27:32.524 [2024-12-06 18:26:03.286638] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:32.524 NewBaseBdev 00:27:32.524 18:26:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:32.524 18:26:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:27:32.524 18:26:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:27:32.524 18:26:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:27:32.524 18:26:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:27:32.524 18:26:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:27:32.524 18:26:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:27:32.524 18:26:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:27:32.524 18:26:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:32.524 18:26:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:32.524 18:26:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:32.524 18:26:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:27:32.524 18:26:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:32.524 18:26:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set 
+x 00:27:32.524 [ 00:27:32.524 { 00:27:32.524 "name": "NewBaseBdev", 00:27:32.524 "aliases": [ 00:27:32.524 "c563fa18-f155-4413-a091-c573f24cbde3" 00:27:32.524 ], 00:27:32.524 "product_name": "Malloc disk", 00:27:32.524 "block_size": 512, 00:27:32.524 "num_blocks": 65536, 00:27:32.524 "uuid": "c563fa18-f155-4413-a091-c573f24cbde3", 00:27:32.524 "assigned_rate_limits": { 00:27:32.524 "rw_ios_per_sec": 0, 00:27:32.524 "rw_mbytes_per_sec": 0, 00:27:32.524 "r_mbytes_per_sec": 0, 00:27:32.524 "w_mbytes_per_sec": 0 00:27:32.524 }, 00:27:32.524 "claimed": true, 00:27:32.524 "claim_type": "exclusive_write", 00:27:32.524 "zoned": false, 00:27:32.524 "supported_io_types": { 00:27:32.524 "read": true, 00:27:32.524 "write": true, 00:27:32.524 "unmap": true, 00:27:32.524 "flush": true, 00:27:32.524 "reset": true, 00:27:32.524 "nvme_admin": false, 00:27:32.524 "nvme_io": false, 00:27:32.524 "nvme_io_md": false, 00:27:32.524 "write_zeroes": true, 00:27:32.524 "zcopy": true, 00:27:32.524 "get_zone_info": false, 00:27:32.524 "zone_management": false, 00:27:32.524 "zone_append": false, 00:27:32.524 "compare": false, 00:27:32.524 "compare_and_write": false, 00:27:32.524 "abort": true, 00:27:32.524 "seek_hole": false, 00:27:32.524 "seek_data": false, 00:27:32.524 "copy": true, 00:27:32.524 "nvme_iov_md": false 00:27:32.524 }, 00:27:32.524 "memory_domains": [ 00:27:32.524 { 00:27:32.524 "dma_device_id": "system", 00:27:32.524 "dma_device_type": 1 00:27:32.524 }, 00:27:32.524 { 00:27:32.524 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:32.524 "dma_device_type": 2 00:27:32.524 } 00:27:32.524 ], 00:27:32.524 "driver_specific": {} 00:27:32.524 } 00:27:32.524 ] 00:27:32.524 18:26:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:32.524 18:26:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:27:32.524 18:26:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state 
Existed_Raid online raid0 64 3 00:27:32.525 18:26:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:27:32.525 18:26:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:27:32.525 18:26:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:27:32.525 18:26:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:27:32.525 18:26:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:27:32.525 18:26:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:32.525 18:26:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:32.525 18:26:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:32.525 18:26:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:32.525 18:26:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:32.525 18:26:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:32.525 18:26:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:32.525 18:26:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:32.525 18:26:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:32.525 18:26:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:32.525 "name": "Existed_Raid", 00:27:32.525 "uuid": "a41ecc3b-35e8-4d06-a987-1bdc55db38f1", 00:27:32.525 "strip_size_kb": 64, 00:27:32.525 "state": "online", 00:27:32.525 "raid_level": "raid0", 00:27:32.525 "superblock": false, 00:27:32.525 "num_base_bdevs": 3, 00:27:32.525 
"num_base_bdevs_discovered": 3, 00:27:32.525 "num_base_bdevs_operational": 3, 00:27:32.525 "base_bdevs_list": [ 00:27:32.525 { 00:27:32.525 "name": "NewBaseBdev", 00:27:32.525 "uuid": "c563fa18-f155-4413-a091-c573f24cbde3", 00:27:32.525 "is_configured": true, 00:27:32.525 "data_offset": 0, 00:27:32.525 "data_size": 65536 00:27:32.525 }, 00:27:32.525 { 00:27:32.525 "name": "BaseBdev2", 00:27:32.525 "uuid": "8b17bd81-f89f-40f8-82d1-cf5397a1d5c6", 00:27:32.525 "is_configured": true, 00:27:32.525 "data_offset": 0, 00:27:32.525 "data_size": 65536 00:27:32.525 }, 00:27:32.525 { 00:27:32.525 "name": "BaseBdev3", 00:27:32.525 "uuid": "33968d18-f041-4116-9de1-268bd953749e", 00:27:32.525 "is_configured": true, 00:27:32.525 "data_offset": 0, 00:27:32.525 "data_size": 65536 00:27:32.525 } 00:27:32.525 ] 00:27:32.525 }' 00:27:32.525 18:26:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:32.525 18:26:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:33.093 18:26:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:27:33.093 18:26:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:27:33.093 18:26:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:27:33.093 18:26:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:27:33.093 18:26:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:27:33.093 18:26:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:27:33.093 18:26:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:27:33.093 18:26:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:27:33.093 18:26:03 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:27:33.093 18:26:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:33.093 [2024-12-06 18:26:03.786060] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:27:33.093 18:26:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:33.093 18:26:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:27:33.093 "name": "Existed_Raid", 00:27:33.093 "aliases": [ 00:27:33.093 "a41ecc3b-35e8-4d06-a987-1bdc55db38f1" 00:27:33.093 ], 00:27:33.093 "product_name": "Raid Volume", 00:27:33.093 "block_size": 512, 00:27:33.093 "num_blocks": 196608, 00:27:33.093 "uuid": "a41ecc3b-35e8-4d06-a987-1bdc55db38f1", 00:27:33.093 "assigned_rate_limits": { 00:27:33.093 "rw_ios_per_sec": 0, 00:27:33.093 "rw_mbytes_per_sec": 0, 00:27:33.093 "r_mbytes_per_sec": 0, 00:27:33.093 "w_mbytes_per_sec": 0 00:27:33.093 }, 00:27:33.093 "claimed": false, 00:27:33.093 "zoned": false, 00:27:33.093 "supported_io_types": { 00:27:33.093 "read": true, 00:27:33.093 "write": true, 00:27:33.093 "unmap": true, 00:27:33.093 "flush": true, 00:27:33.093 "reset": true, 00:27:33.093 "nvme_admin": false, 00:27:33.093 "nvme_io": false, 00:27:33.093 "nvme_io_md": false, 00:27:33.093 "write_zeroes": true, 00:27:33.093 "zcopy": false, 00:27:33.093 "get_zone_info": false, 00:27:33.093 "zone_management": false, 00:27:33.093 "zone_append": false, 00:27:33.093 "compare": false, 00:27:33.093 "compare_and_write": false, 00:27:33.093 "abort": false, 00:27:33.093 "seek_hole": false, 00:27:33.093 "seek_data": false, 00:27:33.093 "copy": false, 00:27:33.093 "nvme_iov_md": false 00:27:33.093 }, 00:27:33.093 "memory_domains": [ 00:27:33.093 { 00:27:33.093 "dma_device_id": "system", 00:27:33.094 "dma_device_type": 1 00:27:33.094 }, 00:27:33.094 { 00:27:33.094 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:33.094 "dma_device_type": 2 00:27:33.094 }, 
00:27:33.094 { 00:27:33.094 "dma_device_id": "system", 00:27:33.094 "dma_device_type": 1 00:27:33.094 }, 00:27:33.094 { 00:27:33.094 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:33.094 "dma_device_type": 2 00:27:33.094 }, 00:27:33.094 { 00:27:33.094 "dma_device_id": "system", 00:27:33.094 "dma_device_type": 1 00:27:33.094 }, 00:27:33.094 { 00:27:33.094 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:33.094 "dma_device_type": 2 00:27:33.094 } 00:27:33.094 ], 00:27:33.094 "driver_specific": { 00:27:33.094 "raid": { 00:27:33.094 "uuid": "a41ecc3b-35e8-4d06-a987-1bdc55db38f1", 00:27:33.094 "strip_size_kb": 64, 00:27:33.094 "state": "online", 00:27:33.094 "raid_level": "raid0", 00:27:33.094 "superblock": false, 00:27:33.094 "num_base_bdevs": 3, 00:27:33.094 "num_base_bdevs_discovered": 3, 00:27:33.094 "num_base_bdevs_operational": 3, 00:27:33.094 "base_bdevs_list": [ 00:27:33.094 { 00:27:33.094 "name": "NewBaseBdev", 00:27:33.094 "uuid": "c563fa18-f155-4413-a091-c573f24cbde3", 00:27:33.094 "is_configured": true, 00:27:33.094 "data_offset": 0, 00:27:33.094 "data_size": 65536 00:27:33.094 }, 00:27:33.094 { 00:27:33.094 "name": "BaseBdev2", 00:27:33.094 "uuid": "8b17bd81-f89f-40f8-82d1-cf5397a1d5c6", 00:27:33.094 "is_configured": true, 00:27:33.094 "data_offset": 0, 00:27:33.094 "data_size": 65536 00:27:33.094 }, 00:27:33.094 { 00:27:33.094 "name": "BaseBdev3", 00:27:33.094 "uuid": "33968d18-f041-4116-9de1-268bd953749e", 00:27:33.094 "is_configured": true, 00:27:33.094 "data_offset": 0, 00:27:33.094 "data_size": 65536 00:27:33.094 } 00:27:33.094 ] 00:27:33.094 } 00:27:33.094 } 00:27:33.094 }' 00:27:33.094 18:26:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:27:33.094 18:26:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:27:33.094 BaseBdev2 00:27:33.094 BaseBdev3' 00:27:33.094 18:26:03 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:27:33.094 18:26:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:27:33.094 18:26:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:27:33.094 18:26:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:27:33.094 18:26:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:27:33.094 18:26:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:33.094 18:26:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:33.094 18:26:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:33.094 18:26:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:27:33.094 18:26:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:27:33.094 18:26:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:27:33.094 18:26:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:27:33.094 18:26:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:27:33.094 18:26:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:33.094 18:26:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:33.094 18:26:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:33.094 18:26:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # 
cmp_base_bdev='512 ' 00:27:33.094 18:26:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:27:33.094 18:26:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:27:33.094 18:26:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:27:33.094 18:26:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:27:33.094 18:26:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:33.094 18:26:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:33.353 18:26:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:33.353 18:26:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:27:33.353 18:26:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:27:33.353 18:26:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:27:33.353 18:26:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:33.353 18:26:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:33.353 [2024-12-06 18:26:04.069762] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:27:33.353 [2024-12-06 18:26:04.069794] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:27:33.353 [2024-12-06 18:26:04.069878] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:27:33.353 [2024-12-06 18:26:04.069933] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:27:33.353 [2024-12-06 18:26:04.069955] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:27:33.353 18:26:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:33.353 18:26:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 63557 00:27:33.353 18:26:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 63557 ']' 00:27:33.353 18:26:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 63557 00:27:33.353 18:26:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:27:33.353 18:26:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:33.353 18:26:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63557 00:27:33.353 18:26:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:27:33.353 18:26:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:27:33.353 18:26:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63557' 00:27:33.353 killing process with pid 63557 00:27:33.353 18:26:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 63557 00:27:33.353 [2024-12-06 18:26:04.113979] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:27:33.353 18:26:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 63557 00:27:33.613 [2024-12-06 18:26:04.426928] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:27:34.990 18:26:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:27:34.990 00:27:34.990 real 0m10.625s 00:27:34.990 user 0m16.780s 00:27:34.990 sys 0m2.176s 00:27:34.990 18:26:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- 
# xtrace_disable 00:27:34.990 18:26:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:27:34.990 ************************************ 00:27:34.990 END TEST raid_state_function_test 00:27:34.990 ************************************ 00:27:34.990 18:26:05 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 3 true 00:27:34.990 18:26:05 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:27:34.990 18:26:05 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:34.990 18:26:05 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:27:34.990 ************************************ 00:27:34.990 START TEST raid_state_function_test_sb 00:27:34.990 ************************************ 00:27:34.990 18:26:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 3 true 00:27:34.990 18:26:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:27:34.990 18:26:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:27:34.990 18:26:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:27:34.990 18:26:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:27:34.990 18:26:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:27:34.990 18:26:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:27:34.990 18:26:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:27:34.990 18:26:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:27:34.990 18:26:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:27:34.990 18:26:05 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:27:34.990 18:26:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:27:34.990 18:26:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:27:34.990 18:26:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:27:34.990 18:26:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:27:34.990 18:26:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:27:34.990 18:26:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:27:34.990 18:26:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:27:34.990 18:26:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:27:34.990 18:26:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:27:34.990 18:26:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:27:34.990 18:26:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:27:34.990 18:26:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:27:34.990 18:26:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:27:34.990 18:26:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:27:34.990 18:26:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:27:34.990 Process raid pid: 64184 00:27:34.990 18:26:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:27:34.990 18:26:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=64184 
00:27:34.990 18:26:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 64184' 00:27:34.990 18:26:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:27:34.990 18:26:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 64184 00:27:34.990 18:26:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 64184 ']' 00:27:34.990 18:26:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:34.990 18:26:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:34.990 18:26:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:34.990 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:34.990 18:26:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:34.990 18:26:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:34.990 [2024-12-06 18:26:05.811938] Starting SPDK v25.01-pre git sha1 b6a18b192 / DPDK 24.03.0 initialization... 
00:27:34.990 [2024-12-06 18:26:05.812384] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:35.248 [2024-12-06 18:26:05.998947] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:35.248 [2024-12-06 18:26:06.123634] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:35.507 [2024-12-06 18:26:06.347201] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:27:35.507 [2024-12-06 18:26:06.347477] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:27:36.072 18:26:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:36.072 18:26:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:27:36.072 18:26:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:27:36.072 18:26:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:36.072 18:26:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:36.072 [2024-12-06 18:26:06.757324] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:27:36.073 [2024-12-06 18:26:06.757392] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:27:36.073 [2024-12-06 18:26:06.757422] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:27:36.073 [2024-12-06 18:26:06.757436] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:27:36.073 [2024-12-06 18:26:06.757445] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with 
name: BaseBdev3 00:27:36.073 [2024-12-06 18:26:06.757458] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:27:36.073 18:26:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:36.073 18:26:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:27:36.073 18:26:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:27:36.073 18:26:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:27:36.073 18:26:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:27:36.073 18:26:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:27:36.073 18:26:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:27:36.073 18:26:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:36.073 18:26:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:36.073 18:26:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:36.073 18:26:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:36.073 18:26:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:36.073 18:26:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:36.073 18:26:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:36.073 18:26:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:36.073 18:26:06 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:36.073 18:26:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:36.073 "name": "Existed_Raid", 00:27:36.073 "uuid": "54b076f7-36cc-4bfd-8db1-473e70081740", 00:27:36.073 "strip_size_kb": 64, 00:27:36.073 "state": "configuring", 00:27:36.073 "raid_level": "raid0", 00:27:36.073 "superblock": true, 00:27:36.073 "num_base_bdevs": 3, 00:27:36.073 "num_base_bdevs_discovered": 0, 00:27:36.073 "num_base_bdevs_operational": 3, 00:27:36.073 "base_bdevs_list": [ 00:27:36.073 { 00:27:36.073 "name": "BaseBdev1", 00:27:36.073 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:36.073 "is_configured": false, 00:27:36.073 "data_offset": 0, 00:27:36.073 "data_size": 0 00:27:36.073 }, 00:27:36.073 { 00:27:36.073 "name": "BaseBdev2", 00:27:36.073 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:36.073 "is_configured": false, 00:27:36.073 "data_offset": 0, 00:27:36.073 "data_size": 0 00:27:36.073 }, 00:27:36.073 { 00:27:36.073 "name": "BaseBdev3", 00:27:36.073 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:36.073 "is_configured": false, 00:27:36.073 "data_offset": 0, 00:27:36.073 "data_size": 0 00:27:36.073 } 00:27:36.073 ] 00:27:36.073 }' 00:27:36.073 18:26:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:36.073 18:26:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:36.331 18:26:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:27:36.331 18:26:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:36.331 18:26:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:36.331 [2024-12-06 18:26:07.216610] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:27:36.331 [2024-12-06 18:26:07.216655] bdev_raid.c: 380:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:27:36.331 18:26:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:36.331 18:26:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:27:36.331 18:26:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:36.331 18:26:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:36.331 [2024-12-06 18:26:07.228625] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:27:36.331 [2024-12-06 18:26:07.228832] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:27:36.331 [2024-12-06 18:26:07.228925] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:27:36.331 [2024-12-06 18:26:07.228969] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:27:36.331 [2024-12-06 18:26:07.229051] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:27:36.331 [2024-12-06 18:26:07.229092] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:27:36.331 18:26:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:36.331 18:26:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:27:36.331 18:26:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:36.331 18:26:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:36.590 [2024-12-06 18:26:07.279271] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:27:36.590 BaseBdev1 
00:27:36.590 18:26:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:36.590 18:26:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:27:36.590 18:26:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:27:36.590 18:26:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:27:36.590 18:26:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:27:36.590 18:26:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:27:36.590 18:26:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:27:36.590 18:26:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:27:36.590 18:26:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:36.590 18:26:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:36.590 18:26:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:36.590 18:26:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:27:36.590 18:26:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:36.590 18:26:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:36.590 [ 00:27:36.590 { 00:27:36.590 "name": "BaseBdev1", 00:27:36.590 "aliases": [ 00:27:36.590 "0b5dc682-6f08-4695-91f2-0e1f7dc1c044" 00:27:36.590 ], 00:27:36.590 "product_name": "Malloc disk", 00:27:36.590 "block_size": 512, 00:27:36.590 "num_blocks": 65536, 00:27:36.590 "uuid": "0b5dc682-6f08-4695-91f2-0e1f7dc1c044", 00:27:36.590 "assigned_rate_limits": { 00:27:36.590 
"rw_ios_per_sec": 0, 00:27:36.590 "rw_mbytes_per_sec": 0, 00:27:36.590 "r_mbytes_per_sec": 0, 00:27:36.590 "w_mbytes_per_sec": 0 00:27:36.590 }, 00:27:36.590 "claimed": true, 00:27:36.590 "claim_type": "exclusive_write", 00:27:36.590 "zoned": false, 00:27:36.590 "supported_io_types": { 00:27:36.590 "read": true, 00:27:36.590 "write": true, 00:27:36.590 "unmap": true, 00:27:36.590 "flush": true, 00:27:36.590 "reset": true, 00:27:36.590 "nvme_admin": false, 00:27:36.590 "nvme_io": false, 00:27:36.590 "nvme_io_md": false, 00:27:36.590 "write_zeroes": true, 00:27:36.590 "zcopy": true, 00:27:36.590 "get_zone_info": false, 00:27:36.590 "zone_management": false, 00:27:36.590 "zone_append": false, 00:27:36.590 "compare": false, 00:27:36.590 "compare_and_write": false, 00:27:36.590 "abort": true, 00:27:36.590 "seek_hole": false, 00:27:36.590 "seek_data": false, 00:27:36.590 "copy": true, 00:27:36.590 "nvme_iov_md": false 00:27:36.590 }, 00:27:36.590 "memory_domains": [ 00:27:36.590 { 00:27:36.590 "dma_device_id": "system", 00:27:36.590 "dma_device_type": 1 00:27:36.590 }, 00:27:36.590 { 00:27:36.590 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:36.590 "dma_device_type": 2 00:27:36.590 } 00:27:36.590 ], 00:27:36.590 "driver_specific": {} 00:27:36.590 } 00:27:36.590 ] 00:27:36.590 18:26:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:36.590 18:26:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:27:36.590 18:26:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:27:36.590 18:26:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:27:36.590 18:26:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:27:36.590 18:26:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid0 00:27:36.590 18:26:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:27:36.590 18:26:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:27:36.590 18:26:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:36.590 18:26:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:36.590 18:26:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:36.590 18:26:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:36.590 18:26:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:36.590 18:26:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:36.590 18:26:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:36.590 18:26:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:36.590 18:26:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:36.590 18:26:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:36.590 "name": "Existed_Raid", 00:27:36.590 "uuid": "ac26f0e6-49cf-47e4-b2cc-d1cc27e554b0", 00:27:36.590 "strip_size_kb": 64, 00:27:36.590 "state": "configuring", 00:27:36.590 "raid_level": "raid0", 00:27:36.590 "superblock": true, 00:27:36.590 "num_base_bdevs": 3, 00:27:36.590 "num_base_bdevs_discovered": 1, 00:27:36.590 "num_base_bdevs_operational": 3, 00:27:36.590 "base_bdevs_list": [ 00:27:36.590 { 00:27:36.590 "name": "BaseBdev1", 00:27:36.590 "uuid": "0b5dc682-6f08-4695-91f2-0e1f7dc1c044", 00:27:36.590 "is_configured": true, 00:27:36.590 "data_offset": 2048, 00:27:36.590 "data_size": 63488 
00:27:36.590 }, 00:27:36.590 { 00:27:36.590 "name": "BaseBdev2", 00:27:36.590 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:36.590 "is_configured": false, 00:27:36.590 "data_offset": 0, 00:27:36.590 "data_size": 0 00:27:36.590 }, 00:27:36.590 { 00:27:36.590 "name": "BaseBdev3", 00:27:36.590 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:36.590 "is_configured": false, 00:27:36.590 "data_offset": 0, 00:27:36.590 "data_size": 0 00:27:36.590 } 00:27:36.590 ] 00:27:36.590 }' 00:27:36.590 18:26:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:36.590 18:26:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:37.157 18:26:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:27:37.157 18:26:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:37.157 18:26:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:37.157 [2024-12-06 18:26:07.802693] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:27:37.157 [2024-12-06 18:26:07.802772] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:27:37.157 18:26:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:37.157 18:26:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:27:37.157 18:26:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:37.157 18:26:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:37.157 [2024-12-06 18:26:07.814780] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:27:37.157 [2024-12-06 
18:26:07.817130] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:27:37.157 [2024-12-06 18:26:07.817192] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:27:37.157 [2024-12-06 18:26:07.817206] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:27:37.157 [2024-12-06 18:26:07.817220] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:27:37.157 18:26:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:37.157 18:26:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:27:37.157 18:26:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:27:37.157 18:26:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:27:37.157 18:26:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:27:37.157 18:26:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:27:37.157 18:26:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:27:37.157 18:26:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:27:37.157 18:26:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:27:37.157 18:26:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:37.157 18:26:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:37.157 18:26:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:37.157 18:26:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 
-- # local tmp 00:27:37.157 18:26:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:37.157 18:26:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:37.157 18:26:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:37.157 18:26:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:37.157 18:26:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:37.157 18:26:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:37.157 "name": "Existed_Raid", 00:27:37.157 "uuid": "a7e36bdd-2d95-4106-af69-edff0bf68658", 00:27:37.157 "strip_size_kb": 64, 00:27:37.157 "state": "configuring", 00:27:37.157 "raid_level": "raid0", 00:27:37.157 "superblock": true, 00:27:37.157 "num_base_bdevs": 3, 00:27:37.157 "num_base_bdevs_discovered": 1, 00:27:37.157 "num_base_bdevs_operational": 3, 00:27:37.157 "base_bdevs_list": [ 00:27:37.157 { 00:27:37.157 "name": "BaseBdev1", 00:27:37.157 "uuid": "0b5dc682-6f08-4695-91f2-0e1f7dc1c044", 00:27:37.157 "is_configured": true, 00:27:37.157 "data_offset": 2048, 00:27:37.157 "data_size": 63488 00:27:37.157 }, 00:27:37.157 { 00:27:37.157 "name": "BaseBdev2", 00:27:37.157 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:37.157 "is_configured": false, 00:27:37.157 "data_offset": 0, 00:27:37.157 "data_size": 0 00:27:37.157 }, 00:27:37.157 { 00:27:37.157 "name": "BaseBdev3", 00:27:37.157 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:37.157 "is_configured": false, 00:27:37.157 "data_offset": 0, 00:27:37.157 "data_size": 0 00:27:37.157 } 00:27:37.157 ] 00:27:37.157 }' 00:27:37.157 18:26:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:37.157 18:26:07 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:27:37.416 18:26:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:27:37.416 18:26:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:37.416 18:26:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:37.416 [2024-12-06 18:26:08.326619] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:27:37.416 BaseBdev2 00:27:37.416 18:26:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:37.416 18:26:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:27:37.416 18:26:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:27:37.416 18:26:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:27:37.416 18:26:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:27:37.416 18:26:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:27:37.416 18:26:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:27:37.416 18:26:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:27:37.416 18:26:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:37.416 18:26:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:37.416 18:26:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:37.416 18:26:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:27:37.416 18:26:08 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:27:37.416 18:26:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:37.416 [ 00:27:37.416 { 00:27:37.416 "name": "BaseBdev2", 00:27:37.416 "aliases": [ 00:27:37.416 "ec2b4491-2472-4390-86ff-5b171694be47" 00:27:37.416 ], 00:27:37.416 "product_name": "Malloc disk", 00:27:37.416 "block_size": 512, 00:27:37.416 "num_blocks": 65536, 00:27:37.416 "uuid": "ec2b4491-2472-4390-86ff-5b171694be47", 00:27:37.416 "assigned_rate_limits": { 00:27:37.416 "rw_ios_per_sec": 0, 00:27:37.416 "rw_mbytes_per_sec": 0, 00:27:37.416 "r_mbytes_per_sec": 0, 00:27:37.416 "w_mbytes_per_sec": 0 00:27:37.416 }, 00:27:37.416 "claimed": true, 00:27:37.416 "claim_type": "exclusive_write", 00:27:37.416 "zoned": false, 00:27:37.416 "supported_io_types": { 00:27:37.416 "read": true, 00:27:37.416 "write": true, 00:27:37.416 "unmap": true, 00:27:37.416 "flush": true, 00:27:37.416 "reset": true, 00:27:37.675 "nvme_admin": false, 00:27:37.675 "nvme_io": false, 00:27:37.675 "nvme_io_md": false, 00:27:37.675 "write_zeroes": true, 00:27:37.675 "zcopy": true, 00:27:37.675 "get_zone_info": false, 00:27:37.675 "zone_management": false, 00:27:37.675 "zone_append": false, 00:27:37.675 "compare": false, 00:27:37.675 "compare_and_write": false, 00:27:37.675 "abort": true, 00:27:37.675 "seek_hole": false, 00:27:37.675 "seek_data": false, 00:27:37.675 "copy": true, 00:27:37.675 "nvme_iov_md": false 00:27:37.675 }, 00:27:37.675 "memory_domains": [ 00:27:37.675 { 00:27:37.675 "dma_device_id": "system", 00:27:37.675 "dma_device_type": 1 00:27:37.675 }, 00:27:37.675 { 00:27:37.675 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:37.675 "dma_device_type": 2 00:27:37.675 } 00:27:37.675 ], 00:27:37.675 "driver_specific": {} 00:27:37.675 } 00:27:37.675 ] 00:27:37.675 18:26:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:37.675 18:26:08 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@911 -- # return 0 00:27:37.675 18:26:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:27:37.675 18:26:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:27:37.675 18:26:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:27:37.675 18:26:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:27:37.675 18:26:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:27:37.675 18:26:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:27:37.675 18:26:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:27:37.675 18:26:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:27:37.675 18:26:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:37.675 18:26:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:37.675 18:26:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:37.675 18:26:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:37.675 18:26:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:37.675 18:26:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:37.675 18:26:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:37.675 18:26:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:37.675 18:26:08 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:37.675 18:26:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:37.675 "name": "Existed_Raid", 00:27:37.675 "uuid": "a7e36bdd-2d95-4106-af69-edff0bf68658", 00:27:37.675 "strip_size_kb": 64, 00:27:37.675 "state": "configuring", 00:27:37.675 "raid_level": "raid0", 00:27:37.675 "superblock": true, 00:27:37.675 "num_base_bdevs": 3, 00:27:37.675 "num_base_bdevs_discovered": 2, 00:27:37.675 "num_base_bdevs_operational": 3, 00:27:37.675 "base_bdevs_list": [ 00:27:37.675 { 00:27:37.675 "name": "BaseBdev1", 00:27:37.675 "uuid": "0b5dc682-6f08-4695-91f2-0e1f7dc1c044", 00:27:37.675 "is_configured": true, 00:27:37.675 "data_offset": 2048, 00:27:37.675 "data_size": 63488 00:27:37.675 }, 00:27:37.675 { 00:27:37.675 "name": "BaseBdev2", 00:27:37.675 "uuid": "ec2b4491-2472-4390-86ff-5b171694be47", 00:27:37.675 "is_configured": true, 00:27:37.675 "data_offset": 2048, 00:27:37.675 "data_size": 63488 00:27:37.675 }, 00:27:37.675 { 00:27:37.675 "name": "BaseBdev3", 00:27:37.675 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:37.675 "is_configured": false, 00:27:37.675 "data_offset": 0, 00:27:37.675 "data_size": 0 00:27:37.675 } 00:27:37.675 ] 00:27:37.675 }' 00:27:37.675 18:26:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:37.675 18:26:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:37.934 18:26:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:27:37.934 18:26:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:37.934 18:26:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:38.194 [2024-12-06 18:26:08.893005] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:27:38.194 [2024-12-06 18:26:08.893305] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:27:38.194 [2024-12-06 18:26:08.893329] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:27:38.194 [2024-12-06 18:26:08.893608] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:27:38.194 [2024-12-06 18:26:08.893804] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:27:38.194 [2024-12-06 18:26:08.893822] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:27:38.194 [2024-12-06 18:26:08.893977] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:38.194 BaseBdev3 00:27:38.194 18:26:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:38.194 18:26:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:27:38.194 18:26:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:27:38.194 18:26:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:27:38.194 18:26:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:27:38.194 18:26:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:27:38.194 18:26:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:27:38.194 18:26:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:27:38.194 18:26:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:38.194 18:26:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:38.194 18:26:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:27:38.194 18:26:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:27:38.194 18:26:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:38.194 18:26:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:38.194 [ 00:27:38.194 { 00:27:38.194 "name": "BaseBdev3", 00:27:38.194 "aliases": [ 00:27:38.194 "16e86b96-7b60-4d6d-ae41-89070e0a8073" 00:27:38.194 ], 00:27:38.194 "product_name": "Malloc disk", 00:27:38.194 "block_size": 512, 00:27:38.194 "num_blocks": 65536, 00:27:38.194 "uuid": "16e86b96-7b60-4d6d-ae41-89070e0a8073", 00:27:38.194 "assigned_rate_limits": { 00:27:38.194 "rw_ios_per_sec": 0, 00:27:38.194 "rw_mbytes_per_sec": 0, 00:27:38.194 "r_mbytes_per_sec": 0, 00:27:38.194 "w_mbytes_per_sec": 0 00:27:38.194 }, 00:27:38.195 "claimed": true, 00:27:38.195 "claim_type": "exclusive_write", 00:27:38.195 "zoned": false, 00:27:38.195 "supported_io_types": { 00:27:38.195 "read": true, 00:27:38.195 "write": true, 00:27:38.195 "unmap": true, 00:27:38.195 "flush": true, 00:27:38.195 "reset": true, 00:27:38.195 "nvme_admin": false, 00:27:38.195 "nvme_io": false, 00:27:38.195 "nvme_io_md": false, 00:27:38.195 "write_zeroes": true, 00:27:38.195 "zcopy": true, 00:27:38.195 "get_zone_info": false, 00:27:38.195 "zone_management": false, 00:27:38.195 "zone_append": false, 00:27:38.195 "compare": false, 00:27:38.195 "compare_and_write": false, 00:27:38.195 "abort": true, 00:27:38.195 "seek_hole": false, 00:27:38.195 "seek_data": false, 00:27:38.195 "copy": true, 00:27:38.195 "nvme_iov_md": false 00:27:38.195 }, 00:27:38.195 "memory_domains": [ 00:27:38.195 { 00:27:38.195 "dma_device_id": "system", 00:27:38.195 "dma_device_type": 1 00:27:38.195 }, 00:27:38.195 { 00:27:38.195 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:38.195 "dma_device_type": 2 00:27:38.195 } 00:27:38.195 ], 00:27:38.195 "driver_specific": 
{} 00:27:38.195 } 00:27:38.195 ] 00:27:38.195 18:26:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:38.195 18:26:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:27:38.195 18:26:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:27:38.195 18:26:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:27:38.195 18:26:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:27:38.195 18:26:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:27:38.195 18:26:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:27:38.195 18:26:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:27:38.195 18:26:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:27:38.195 18:26:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:27:38.195 18:26:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:38.195 18:26:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:38.195 18:26:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:38.195 18:26:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:38.195 18:26:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:38.195 18:26:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:38.195 18:26:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:27:38.195 18:26:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:38.195 18:26:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:38.195 18:26:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:38.195 "name": "Existed_Raid", 00:27:38.195 "uuid": "a7e36bdd-2d95-4106-af69-edff0bf68658", 00:27:38.195 "strip_size_kb": 64, 00:27:38.195 "state": "online", 00:27:38.195 "raid_level": "raid0", 00:27:38.195 "superblock": true, 00:27:38.195 "num_base_bdevs": 3, 00:27:38.195 "num_base_bdevs_discovered": 3, 00:27:38.195 "num_base_bdevs_operational": 3, 00:27:38.195 "base_bdevs_list": [ 00:27:38.195 { 00:27:38.195 "name": "BaseBdev1", 00:27:38.195 "uuid": "0b5dc682-6f08-4695-91f2-0e1f7dc1c044", 00:27:38.195 "is_configured": true, 00:27:38.195 "data_offset": 2048, 00:27:38.195 "data_size": 63488 00:27:38.195 }, 00:27:38.195 { 00:27:38.195 "name": "BaseBdev2", 00:27:38.195 "uuid": "ec2b4491-2472-4390-86ff-5b171694be47", 00:27:38.195 "is_configured": true, 00:27:38.195 "data_offset": 2048, 00:27:38.195 "data_size": 63488 00:27:38.195 }, 00:27:38.195 { 00:27:38.195 "name": "BaseBdev3", 00:27:38.195 "uuid": "16e86b96-7b60-4d6d-ae41-89070e0a8073", 00:27:38.195 "is_configured": true, 00:27:38.195 "data_offset": 2048, 00:27:38.195 "data_size": 63488 00:27:38.195 } 00:27:38.195 ] 00:27:38.195 }' 00:27:38.195 18:26:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:38.195 18:26:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:38.454 18:26:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:27:38.454 18:26:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:27:38.454 18:26:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # 
local raid_bdev_info 00:27:38.454 18:26:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:27:38.454 18:26:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:27:38.455 18:26:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:27:38.455 18:26:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:27:38.455 18:26:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:27:38.455 18:26:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:38.455 18:26:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:38.455 [2024-12-06 18:26:09.388695] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:27:38.714 18:26:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:38.714 18:26:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:27:38.714 "name": "Existed_Raid", 00:27:38.714 "aliases": [ 00:27:38.714 "a7e36bdd-2d95-4106-af69-edff0bf68658" 00:27:38.714 ], 00:27:38.714 "product_name": "Raid Volume", 00:27:38.714 "block_size": 512, 00:27:38.714 "num_blocks": 190464, 00:27:38.714 "uuid": "a7e36bdd-2d95-4106-af69-edff0bf68658", 00:27:38.714 "assigned_rate_limits": { 00:27:38.714 "rw_ios_per_sec": 0, 00:27:38.714 "rw_mbytes_per_sec": 0, 00:27:38.714 "r_mbytes_per_sec": 0, 00:27:38.714 "w_mbytes_per_sec": 0 00:27:38.714 }, 00:27:38.714 "claimed": false, 00:27:38.714 "zoned": false, 00:27:38.714 "supported_io_types": { 00:27:38.714 "read": true, 00:27:38.714 "write": true, 00:27:38.714 "unmap": true, 00:27:38.714 "flush": true, 00:27:38.714 "reset": true, 00:27:38.714 "nvme_admin": false, 00:27:38.714 "nvme_io": false, 00:27:38.714 "nvme_io_md": false, 00:27:38.714 
"write_zeroes": true, 00:27:38.714 "zcopy": false, 00:27:38.714 "get_zone_info": false, 00:27:38.714 "zone_management": false, 00:27:38.714 "zone_append": false, 00:27:38.714 "compare": false, 00:27:38.714 "compare_and_write": false, 00:27:38.714 "abort": false, 00:27:38.714 "seek_hole": false, 00:27:38.714 "seek_data": false, 00:27:38.714 "copy": false, 00:27:38.714 "nvme_iov_md": false 00:27:38.714 }, 00:27:38.714 "memory_domains": [ 00:27:38.714 { 00:27:38.714 "dma_device_id": "system", 00:27:38.714 "dma_device_type": 1 00:27:38.714 }, 00:27:38.714 { 00:27:38.714 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:38.714 "dma_device_type": 2 00:27:38.714 }, 00:27:38.714 { 00:27:38.714 "dma_device_id": "system", 00:27:38.714 "dma_device_type": 1 00:27:38.714 }, 00:27:38.714 { 00:27:38.714 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:38.714 "dma_device_type": 2 00:27:38.714 }, 00:27:38.714 { 00:27:38.714 "dma_device_id": "system", 00:27:38.714 "dma_device_type": 1 00:27:38.714 }, 00:27:38.714 { 00:27:38.714 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:38.714 "dma_device_type": 2 00:27:38.714 } 00:27:38.714 ], 00:27:38.714 "driver_specific": { 00:27:38.714 "raid": { 00:27:38.714 "uuid": "a7e36bdd-2d95-4106-af69-edff0bf68658", 00:27:38.714 "strip_size_kb": 64, 00:27:38.714 "state": "online", 00:27:38.714 "raid_level": "raid0", 00:27:38.714 "superblock": true, 00:27:38.714 "num_base_bdevs": 3, 00:27:38.714 "num_base_bdevs_discovered": 3, 00:27:38.714 "num_base_bdevs_operational": 3, 00:27:38.714 "base_bdevs_list": [ 00:27:38.714 { 00:27:38.714 "name": "BaseBdev1", 00:27:38.714 "uuid": "0b5dc682-6f08-4695-91f2-0e1f7dc1c044", 00:27:38.714 "is_configured": true, 00:27:38.714 "data_offset": 2048, 00:27:38.714 "data_size": 63488 00:27:38.714 }, 00:27:38.714 { 00:27:38.714 "name": "BaseBdev2", 00:27:38.714 "uuid": "ec2b4491-2472-4390-86ff-5b171694be47", 00:27:38.714 "is_configured": true, 00:27:38.714 "data_offset": 2048, 00:27:38.714 "data_size": 63488 00:27:38.714 }, 
00:27:38.714 { 00:27:38.714 "name": "BaseBdev3", 00:27:38.714 "uuid": "16e86b96-7b60-4d6d-ae41-89070e0a8073", 00:27:38.714 "is_configured": true, 00:27:38.714 "data_offset": 2048, 00:27:38.714 "data_size": 63488 00:27:38.714 } 00:27:38.714 ] 00:27:38.714 } 00:27:38.714 } 00:27:38.714 }' 00:27:38.714 18:26:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:27:38.714 18:26:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:27:38.714 BaseBdev2 00:27:38.714 BaseBdev3' 00:27:38.714 18:26:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:27:38.714 18:26:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:27:38.714 18:26:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:27:38.714 18:26:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:27:38.714 18:26:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:38.714 18:26:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:38.714 18:26:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:27:38.714 18:26:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:38.714 18:26:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:27:38.714 18:26:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:27:38.714 18:26:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:27:38.714 
18:26:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:27:38.714 18:26:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:27:38.714 18:26:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:38.714 18:26:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:38.714 18:26:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:38.714 18:26:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:27:38.714 18:26:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:27:38.714 18:26:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:27:38.714 18:26:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:27:38.714 18:26:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:38.714 18:26:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:38.714 18:26:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:27:38.714 18:26:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:38.714 18:26:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:27:38.714 18:26:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:27:38.714 18:26:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:27:38.714 18:26:09 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:27:38.714 18:26:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:38.714 [2024-12-06 18:26:09.636109] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:27:38.714 [2024-12-06 18:26:09.636143] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:27:38.715 [2024-12-06 18:26:09.636210] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:27:38.998 18:26:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:38.999 18:26:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:27:38.999 18:26:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:27:38.999 18:26:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:27:38.999 18:26:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:27:38.999 18:26:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:27:38.999 18:26:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2 00:27:38.999 18:26:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:27:38.999 18:26:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:27:38.999 18:26:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:27:38.999 18:26:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:27:38.999 18:26:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:27:38.999 18:26:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:27:38.999 18:26:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:38.999 18:26:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:38.999 18:26:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:38.999 18:26:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:38.999 18:26:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:38.999 18:26:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:38.999 18:26:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:38.999 18:26:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:38.999 18:26:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:38.999 "name": "Existed_Raid", 00:27:38.999 "uuid": "a7e36bdd-2d95-4106-af69-edff0bf68658", 00:27:38.999 "strip_size_kb": 64, 00:27:38.999 "state": "offline", 00:27:38.999 "raid_level": "raid0", 00:27:38.999 "superblock": true, 00:27:38.999 "num_base_bdevs": 3, 00:27:38.999 "num_base_bdevs_discovered": 2, 00:27:38.999 "num_base_bdevs_operational": 2, 00:27:38.999 "base_bdevs_list": [ 00:27:38.999 { 00:27:38.999 "name": null, 00:27:38.999 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:38.999 "is_configured": false, 00:27:38.999 "data_offset": 0, 00:27:38.999 "data_size": 63488 00:27:38.999 }, 00:27:38.999 { 00:27:38.999 "name": "BaseBdev2", 00:27:38.999 "uuid": "ec2b4491-2472-4390-86ff-5b171694be47", 00:27:38.999 "is_configured": true, 00:27:38.999 "data_offset": 2048, 00:27:38.999 "data_size": 63488 00:27:38.999 }, 00:27:38.999 { 00:27:38.999 "name": "BaseBdev3", 00:27:38.999 "uuid": "16e86b96-7b60-4d6d-ae41-89070e0a8073", 
00:27:38.999 "is_configured": true, 00:27:38.999 "data_offset": 2048, 00:27:38.999 "data_size": 63488 00:27:38.999 } 00:27:38.999 ] 00:27:38.999 }' 00:27:38.999 18:26:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:38.999 18:26:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:39.257 18:26:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:27:39.257 18:26:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:27:39.257 18:26:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:27:39.257 18:26:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:39.257 18:26:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:39.257 18:26:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:39.516 18:26:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:39.516 18:26:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:27:39.516 18:26:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:27:39.516 18:26:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:27:39.516 18:26:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:39.516 18:26:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:39.516 [2024-12-06 18:26:10.239824] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:27:39.516 18:26:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:39.516 18:26:10 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:27:39.516 18:26:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:27:39.516 18:26:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:39.516 18:26:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:27:39.516 18:26:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:39.516 18:26:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:39.516 18:26:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:39.516 18:26:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:27:39.516 18:26:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:27:39.516 18:26:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:27:39.516 18:26:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:39.516 18:26:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:39.516 [2024-12-06 18:26:10.394413] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:27:39.516 [2024-12-06 18:26:10.394475] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:27:39.776 18:26:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:39.776 18:26:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:27:39.776 18:26:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:27:39.776 18:26:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:27:39.776 18:26:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:39.776 18:26:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:39.776 18:26:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:27:39.776 18:26:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:39.776 18:26:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:27:39.776 18:26:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:27:39.776 18:26:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:27:39.776 18:26:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:27:39.776 18:26:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:27:39.776 18:26:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:27:39.776 18:26:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:39.776 18:26:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:39.776 BaseBdev2 00:27:39.776 18:26:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:39.776 18:26:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:27:39.776 18:26:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:27:39.776 18:26:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:27:39.776 18:26:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:27:39.776 18:26:10 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:27:39.776 18:26:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:27:39.776 18:26:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:27:39.776 18:26:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:39.776 18:26:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:39.776 18:26:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:39.776 18:26:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:27:39.776 18:26:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:39.776 18:26:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:39.776 [ 00:27:39.776 { 00:27:39.776 "name": "BaseBdev2", 00:27:39.776 "aliases": [ 00:27:39.776 "3146b9b6-d3a1-4cff-a795-1e20534d9d00" 00:27:39.776 ], 00:27:39.776 "product_name": "Malloc disk", 00:27:39.776 "block_size": 512, 00:27:39.776 "num_blocks": 65536, 00:27:39.776 "uuid": "3146b9b6-d3a1-4cff-a795-1e20534d9d00", 00:27:39.776 "assigned_rate_limits": { 00:27:39.776 "rw_ios_per_sec": 0, 00:27:39.776 "rw_mbytes_per_sec": 0, 00:27:39.776 "r_mbytes_per_sec": 0, 00:27:39.776 "w_mbytes_per_sec": 0 00:27:39.776 }, 00:27:39.776 "claimed": false, 00:27:39.776 "zoned": false, 00:27:39.776 "supported_io_types": { 00:27:39.776 "read": true, 00:27:39.776 "write": true, 00:27:39.776 "unmap": true, 00:27:39.776 "flush": true, 00:27:39.776 "reset": true, 00:27:39.776 "nvme_admin": false, 00:27:39.776 "nvme_io": false, 00:27:39.776 "nvme_io_md": false, 00:27:39.776 "write_zeroes": true, 00:27:39.776 "zcopy": true, 00:27:39.776 "get_zone_info": false, 00:27:39.776 
"zone_management": false, 00:27:39.776 "zone_append": false, 00:27:39.776 "compare": false, 00:27:39.776 "compare_and_write": false, 00:27:39.776 "abort": true, 00:27:39.776 "seek_hole": false, 00:27:39.776 "seek_data": false, 00:27:39.777 "copy": true, 00:27:39.777 "nvme_iov_md": false 00:27:39.777 }, 00:27:39.777 "memory_domains": [ 00:27:39.777 { 00:27:39.777 "dma_device_id": "system", 00:27:39.777 "dma_device_type": 1 00:27:39.777 }, 00:27:39.777 { 00:27:39.777 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:39.777 "dma_device_type": 2 00:27:39.777 } 00:27:39.777 ], 00:27:39.777 "driver_specific": {} 00:27:39.777 } 00:27:39.777 ] 00:27:39.777 18:26:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:39.777 18:26:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:27:39.777 18:26:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:27:39.777 18:26:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:27:39.777 18:26:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:27:39.777 18:26:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:39.777 18:26:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:39.777 BaseBdev3 00:27:39.777 18:26:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:39.777 18:26:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:27:39.777 18:26:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:27:39.777 18:26:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:27:39.777 18:26:10 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@905 -- # local i 00:27:39.777 18:26:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:27:39.777 18:26:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:27:39.777 18:26:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:27:39.777 18:26:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:39.777 18:26:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:39.777 18:26:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:39.777 18:26:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:27:39.777 18:26:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:39.777 18:26:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:39.777 [ 00:27:39.777 { 00:27:39.777 "name": "BaseBdev3", 00:27:39.777 "aliases": [ 00:27:39.777 "787770e3-9230-4b10-a990-e5d128167a3d" 00:27:39.777 ], 00:27:39.777 "product_name": "Malloc disk", 00:27:39.777 "block_size": 512, 00:27:39.777 "num_blocks": 65536, 00:27:39.777 "uuid": "787770e3-9230-4b10-a990-e5d128167a3d", 00:27:39.777 "assigned_rate_limits": { 00:27:39.777 "rw_ios_per_sec": 0, 00:27:39.777 "rw_mbytes_per_sec": 0, 00:27:39.777 "r_mbytes_per_sec": 0, 00:27:39.777 "w_mbytes_per_sec": 0 00:27:39.777 }, 00:27:39.777 "claimed": false, 00:27:39.777 "zoned": false, 00:27:39.777 "supported_io_types": { 00:27:39.777 "read": true, 00:27:39.777 "write": true, 00:27:39.777 "unmap": true, 00:27:39.777 "flush": true, 00:27:39.777 "reset": true, 00:27:40.035 "nvme_admin": false, 00:27:40.035 "nvme_io": false, 00:27:40.035 "nvme_io_md": false, 00:27:40.035 "write_zeroes": true, 00:27:40.035 
"zcopy": true, 00:27:40.035 "get_zone_info": false, 00:27:40.035 "zone_management": false, 00:27:40.035 "zone_append": false, 00:27:40.035 "compare": false, 00:27:40.035 "compare_and_write": false, 00:27:40.035 "abort": true, 00:27:40.035 "seek_hole": false, 00:27:40.035 "seek_data": false, 00:27:40.035 "copy": true, 00:27:40.035 "nvme_iov_md": false 00:27:40.035 }, 00:27:40.035 "memory_domains": [ 00:27:40.035 { 00:27:40.035 "dma_device_id": "system", 00:27:40.035 "dma_device_type": 1 00:27:40.035 }, 00:27:40.035 { 00:27:40.036 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:40.036 "dma_device_type": 2 00:27:40.036 } 00:27:40.036 ], 00:27:40.036 "driver_specific": {} 00:27:40.036 } 00:27:40.036 ] 00:27:40.036 18:26:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:40.036 18:26:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:27:40.036 18:26:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:27:40.036 18:26:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:27:40.036 18:26:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:27:40.036 18:26:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:40.036 18:26:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:40.036 [2024-12-06 18:26:10.738990] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:27:40.036 [2024-12-06 18:26:10.739064] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:27:40.036 [2024-12-06 18:26:10.739093] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:27:40.036 [2024-12-06 18:26:10.741308] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:27:40.036 18:26:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:40.036 18:26:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:27:40.036 18:26:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:27:40.036 18:26:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:27:40.036 18:26:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:27:40.036 18:26:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:27:40.036 18:26:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:27:40.036 18:26:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:40.036 18:26:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:40.036 18:26:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:40.036 18:26:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:40.036 18:26:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:40.036 18:26:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:40.036 18:26:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:40.036 18:26:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:40.036 18:26:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:40.036 18:26:10 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:40.036 "name": "Existed_Raid", 00:27:40.036 "uuid": "4e2612d0-532f-432c-a6da-0c07a92ea4d1", 00:27:40.036 "strip_size_kb": 64, 00:27:40.036 "state": "configuring", 00:27:40.036 "raid_level": "raid0", 00:27:40.036 "superblock": true, 00:27:40.036 "num_base_bdevs": 3, 00:27:40.036 "num_base_bdevs_discovered": 2, 00:27:40.036 "num_base_bdevs_operational": 3, 00:27:40.036 "base_bdevs_list": [ 00:27:40.036 { 00:27:40.036 "name": "BaseBdev1", 00:27:40.036 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:40.036 "is_configured": false, 00:27:40.036 "data_offset": 0, 00:27:40.036 "data_size": 0 00:27:40.036 }, 00:27:40.036 { 00:27:40.036 "name": "BaseBdev2", 00:27:40.036 "uuid": "3146b9b6-d3a1-4cff-a795-1e20534d9d00", 00:27:40.036 "is_configured": true, 00:27:40.036 "data_offset": 2048, 00:27:40.036 "data_size": 63488 00:27:40.036 }, 00:27:40.036 { 00:27:40.036 "name": "BaseBdev3", 00:27:40.036 "uuid": "787770e3-9230-4b10-a990-e5d128167a3d", 00:27:40.036 "is_configured": true, 00:27:40.036 "data_offset": 2048, 00:27:40.036 "data_size": 63488 00:27:40.036 } 00:27:40.036 ] 00:27:40.036 }' 00:27:40.036 18:26:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:40.036 18:26:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:40.294 18:26:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:27:40.294 18:26:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:40.294 18:26:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:40.294 [2024-12-06 18:26:11.174387] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:27:40.294 18:26:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:40.294 18:26:11 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:27:40.294 18:26:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:27:40.294 18:26:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:27:40.294 18:26:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:27:40.294 18:26:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:27:40.294 18:26:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:27:40.294 18:26:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:40.294 18:26:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:40.294 18:26:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:40.294 18:26:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:40.294 18:26:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:40.294 18:26:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:40.294 18:26:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:40.294 18:26:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:40.294 18:26:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:40.294 18:26:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:40.294 "name": "Existed_Raid", 00:27:40.294 "uuid": "4e2612d0-532f-432c-a6da-0c07a92ea4d1", 00:27:40.294 "strip_size_kb": 64, 
00:27:40.294 "state": "configuring", 00:27:40.294 "raid_level": "raid0", 00:27:40.294 "superblock": true, 00:27:40.294 "num_base_bdevs": 3, 00:27:40.294 "num_base_bdevs_discovered": 1, 00:27:40.294 "num_base_bdevs_operational": 3, 00:27:40.294 "base_bdevs_list": [ 00:27:40.294 { 00:27:40.294 "name": "BaseBdev1", 00:27:40.294 "uuid": "00000000-0000-0000-0000-000000000000", 00:27:40.294 "is_configured": false, 00:27:40.294 "data_offset": 0, 00:27:40.294 "data_size": 0 00:27:40.294 }, 00:27:40.294 { 00:27:40.294 "name": null, 00:27:40.294 "uuid": "3146b9b6-d3a1-4cff-a795-1e20534d9d00", 00:27:40.294 "is_configured": false, 00:27:40.294 "data_offset": 0, 00:27:40.294 "data_size": 63488 00:27:40.294 }, 00:27:40.294 { 00:27:40.295 "name": "BaseBdev3", 00:27:40.295 "uuid": "787770e3-9230-4b10-a990-e5d128167a3d", 00:27:40.295 "is_configured": true, 00:27:40.295 "data_offset": 2048, 00:27:40.295 "data_size": 63488 00:27:40.295 } 00:27:40.295 ] 00:27:40.295 }' 00:27:40.295 18:26:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:40.295 18:26:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:40.863 18:26:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:40.863 18:26:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:40.863 18:26:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:40.863 18:26:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:27:40.863 18:26:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:40.863 18:26:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:27:40.863 18:26:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev1 00:27:40.863 18:26:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:40.863 18:26:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:40.863 [2024-12-06 18:26:11.706093] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:27:40.863 BaseBdev1 00:27:40.863 18:26:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:40.863 18:26:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:27:40.863 18:26:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:27:40.863 18:26:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:27:40.863 18:26:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:27:40.863 18:26:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:27:40.863 18:26:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:27:40.863 18:26:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:27:40.863 18:26:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:40.863 18:26:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:40.863 18:26:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:40.863 18:26:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:27:40.863 18:26:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:40.863 18:26:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:40.863 
[ 00:27:40.863 { 00:27:40.863 "name": "BaseBdev1", 00:27:40.863 "aliases": [ 00:27:40.863 "c78a884e-0cfb-407e-9f83-07523c8760b7" 00:27:40.863 ], 00:27:40.863 "product_name": "Malloc disk", 00:27:40.863 "block_size": 512, 00:27:40.863 "num_blocks": 65536, 00:27:40.863 "uuid": "c78a884e-0cfb-407e-9f83-07523c8760b7", 00:27:40.863 "assigned_rate_limits": { 00:27:40.863 "rw_ios_per_sec": 0, 00:27:40.863 "rw_mbytes_per_sec": 0, 00:27:40.863 "r_mbytes_per_sec": 0, 00:27:40.863 "w_mbytes_per_sec": 0 00:27:40.863 }, 00:27:40.863 "claimed": true, 00:27:40.863 "claim_type": "exclusive_write", 00:27:40.863 "zoned": false, 00:27:40.863 "supported_io_types": { 00:27:40.863 "read": true, 00:27:40.863 "write": true, 00:27:40.863 "unmap": true, 00:27:40.863 "flush": true, 00:27:40.863 "reset": true, 00:27:40.863 "nvme_admin": false, 00:27:40.863 "nvme_io": false, 00:27:40.863 "nvme_io_md": false, 00:27:40.863 "write_zeroes": true, 00:27:40.863 "zcopy": true, 00:27:40.863 "get_zone_info": false, 00:27:40.863 "zone_management": false, 00:27:40.863 "zone_append": false, 00:27:40.863 "compare": false, 00:27:40.863 "compare_and_write": false, 00:27:40.863 "abort": true, 00:27:40.863 "seek_hole": false, 00:27:40.863 "seek_data": false, 00:27:40.863 "copy": true, 00:27:40.863 "nvme_iov_md": false 00:27:40.863 }, 00:27:40.863 "memory_domains": [ 00:27:40.863 { 00:27:40.863 "dma_device_id": "system", 00:27:40.863 "dma_device_type": 1 00:27:40.863 }, 00:27:40.863 { 00:27:40.863 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:40.863 "dma_device_type": 2 00:27:40.863 } 00:27:40.863 ], 00:27:40.863 "driver_specific": {} 00:27:40.863 } 00:27:40.863 ] 00:27:40.863 18:26:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:40.863 18:26:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:27:40.863 18:26:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid 
configuring raid0 64 3 00:27:40.863 18:26:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:27:40.863 18:26:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:27:40.863 18:26:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:27:40.863 18:26:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:27:40.863 18:26:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:27:40.863 18:26:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:40.863 18:26:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:40.863 18:26:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:40.863 18:26:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:40.863 18:26:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:40.863 18:26:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:40.863 18:26:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:40.863 18:26:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:40.863 18:26:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:40.863 18:26:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:40.863 "name": "Existed_Raid", 00:27:40.863 "uuid": "4e2612d0-532f-432c-a6da-0c07a92ea4d1", 00:27:40.863 "strip_size_kb": 64, 00:27:40.863 "state": "configuring", 00:27:40.863 "raid_level": "raid0", 00:27:40.863 "superblock": true, 
00:27:40.863 "num_base_bdevs": 3, 00:27:40.863 "num_base_bdevs_discovered": 2, 00:27:40.863 "num_base_bdevs_operational": 3, 00:27:40.863 "base_bdevs_list": [ 00:27:40.863 { 00:27:40.863 "name": "BaseBdev1", 00:27:40.863 "uuid": "c78a884e-0cfb-407e-9f83-07523c8760b7", 00:27:40.863 "is_configured": true, 00:27:40.863 "data_offset": 2048, 00:27:40.864 "data_size": 63488 00:27:40.864 }, 00:27:40.864 { 00:27:40.864 "name": null, 00:27:40.864 "uuid": "3146b9b6-d3a1-4cff-a795-1e20534d9d00", 00:27:40.864 "is_configured": false, 00:27:40.864 "data_offset": 0, 00:27:40.864 "data_size": 63488 00:27:40.864 }, 00:27:40.864 { 00:27:40.864 "name": "BaseBdev3", 00:27:40.864 "uuid": "787770e3-9230-4b10-a990-e5d128167a3d", 00:27:40.864 "is_configured": true, 00:27:40.864 "data_offset": 2048, 00:27:40.864 "data_size": 63488 00:27:40.864 } 00:27:40.864 ] 00:27:40.864 }' 00:27:40.864 18:26:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:40.864 18:26:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:41.431 18:26:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:27:41.431 18:26:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:41.431 18:26:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:41.431 18:26:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:41.431 18:26:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:41.431 18:26:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:27:41.431 18:26:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:27:41.431 18:26:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- 
# xtrace_disable 00:27:41.431 18:26:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:41.431 [2024-12-06 18:26:12.233813] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:27:41.431 18:26:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:41.431 18:26:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:27:41.431 18:26:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:27:41.431 18:26:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:27:41.431 18:26:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:27:41.431 18:26:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:27:41.431 18:26:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:27:41.431 18:26:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:41.431 18:26:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:41.431 18:26:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:41.431 18:26:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:41.431 18:26:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:41.431 18:26:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:41.431 18:26:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:41.431 18:26:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:27:41.431 18:26:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:41.431 18:26:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:41.431 "name": "Existed_Raid", 00:27:41.431 "uuid": "4e2612d0-532f-432c-a6da-0c07a92ea4d1", 00:27:41.431 "strip_size_kb": 64, 00:27:41.431 "state": "configuring", 00:27:41.431 "raid_level": "raid0", 00:27:41.431 "superblock": true, 00:27:41.431 "num_base_bdevs": 3, 00:27:41.431 "num_base_bdevs_discovered": 1, 00:27:41.431 "num_base_bdevs_operational": 3, 00:27:41.431 "base_bdevs_list": [ 00:27:41.431 { 00:27:41.431 "name": "BaseBdev1", 00:27:41.431 "uuid": "c78a884e-0cfb-407e-9f83-07523c8760b7", 00:27:41.431 "is_configured": true, 00:27:41.431 "data_offset": 2048, 00:27:41.431 "data_size": 63488 00:27:41.431 }, 00:27:41.431 { 00:27:41.431 "name": null, 00:27:41.431 "uuid": "3146b9b6-d3a1-4cff-a795-1e20534d9d00", 00:27:41.431 "is_configured": false, 00:27:41.431 "data_offset": 0, 00:27:41.431 "data_size": 63488 00:27:41.431 }, 00:27:41.431 { 00:27:41.431 "name": null, 00:27:41.431 "uuid": "787770e3-9230-4b10-a990-e5d128167a3d", 00:27:41.431 "is_configured": false, 00:27:41.431 "data_offset": 0, 00:27:41.431 "data_size": 63488 00:27:41.431 } 00:27:41.431 ] 00:27:41.431 }' 00:27:41.431 18:26:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:41.431 18:26:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:41.998 18:26:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:27:41.999 18:26:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:41.999 18:26:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:41.999 18:26:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 
-- # set +x 00:27:41.999 18:26:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:41.999 18:26:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:27:41.999 18:26:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:27:41.999 18:26:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:41.999 18:26:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:41.999 [2024-12-06 18:26:12.733871] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:27:41.999 18:26:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:41.999 18:26:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:27:41.999 18:26:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:27:41.999 18:26:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:27:41.999 18:26:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:27:41.999 18:26:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:27:41.999 18:26:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:27:41.999 18:26:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:41.999 18:26:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:41.999 18:26:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:41.999 18:26:12 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:27:41.999 18:26:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:41.999 18:26:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:41.999 18:26:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:41.999 18:26:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:41.999 18:26:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:41.999 18:26:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:41.999 "name": "Existed_Raid", 00:27:41.999 "uuid": "4e2612d0-532f-432c-a6da-0c07a92ea4d1", 00:27:41.999 "strip_size_kb": 64, 00:27:41.999 "state": "configuring", 00:27:41.999 "raid_level": "raid0", 00:27:41.999 "superblock": true, 00:27:41.999 "num_base_bdevs": 3, 00:27:41.999 "num_base_bdevs_discovered": 2, 00:27:41.999 "num_base_bdevs_operational": 3, 00:27:41.999 "base_bdevs_list": [ 00:27:41.999 { 00:27:41.999 "name": "BaseBdev1", 00:27:41.999 "uuid": "c78a884e-0cfb-407e-9f83-07523c8760b7", 00:27:41.999 "is_configured": true, 00:27:41.999 "data_offset": 2048, 00:27:41.999 "data_size": 63488 00:27:41.999 }, 00:27:41.999 { 00:27:41.999 "name": null, 00:27:41.999 "uuid": "3146b9b6-d3a1-4cff-a795-1e20534d9d00", 00:27:41.999 "is_configured": false, 00:27:41.999 "data_offset": 0, 00:27:41.999 "data_size": 63488 00:27:41.999 }, 00:27:41.999 { 00:27:41.999 "name": "BaseBdev3", 00:27:41.999 "uuid": "787770e3-9230-4b10-a990-e5d128167a3d", 00:27:41.999 "is_configured": true, 00:27:41.999 "data_offset": 2048, 00:27:41.999 "data_size": 63488 00:27:41.999 } 00:27:41.999 ] 00:27:41.999 }' 00:27:41.999 18:26:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:41.999 18:26:12 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@10 -- # set +x 00:27:42.258 18:26:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:42.258 18:26:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:42.258 18:26:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:42.258 18:26:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:27:42.258 18:26:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:42.517 18:26:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:27:42.517 18:26:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:27:42.517 18:26:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:42.517 18:26:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:42.517 [2024-12-06 18:26:13.233873] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:27:42.517 18:26:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:42.517 18:26:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:27:42.517 18:26:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:27:42.517 18:26:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:27:42.517 18:26:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:27:42.517 18:26:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:27:42.517 18:26:13 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:27:42.517 18:26:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:42.517 18:26:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:42.517 18:26:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:42.517 18:26:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:42.517 18:26:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:42.517 18:26:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:42.517 18:26:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:42.517 18:26:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:42.517 18:26:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:42.517 18:26:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:42.517 "name": "Existed_Raid", 00:27:42.517 "uuid": "4e2612d0-532f-432c-a6da-0c07a92ea4d1", 00:27:42.517 "strip_size_kb": 64, 00:27:42.517 "state": "configuring", 00:27:42.517 "raid_level": "raid0", 00:27:42.517 "superblock": true, 00:27:42.517 "num_base_bdevs": 3, 00:27:42.517 "num_base_bdevs_discovered": 1, 00:27:42.517 "num_base_bdevs_operational": 3, 00:27:42.517 "base_bdevs_list": [ 00:27:42.517 { 00:27:42.517 "name": null, 00:27:42.517 "uuid": "c78a884e-0cfb-407e-9f83-07523c8760b7", 00:27:42.517 "is_configured": false, 00:27:42.517 "data_offset": 0, 00:27:42.517 "data_size": 63488 00:27:42.517 }, 00:27:42.517 { 00:27:42.517 "name": null, 00:27:42.517 "uuid": "3146b9b6-d3a1-4cff-a795-1e20534d9d00", 00:27:42.517 "is_configured": false, 00:27:42.517 "data_offset": 0, 00:27:42.517 
"data_size": 63488 00:27:42.517 }, 00:27:42.517 { 00:27:42.517 "name": "BaseBdev3", 00:27:42.517 "uuid": "787770e3-9230-4b10-a990-e5d128167a3d", 00:27:42.517 "is_configured": true, 00:27:42.517 "data_offset": 2048, 00:27:42.517 "data_size": 63488 00:27:42.517 } 00:27:42.517 ] 00:27:42.517 }' 00:27:42.517 18:26:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:42.517 18:26:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:43.084 18:26:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:43.084 18:26:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:43.084 18:26:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:27:43.084 18:26:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:43.084 18:26:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:43.084 18:26:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:27:43.084 18:26:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:27:43.084 18:26:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:43.084 18:26:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:43.084 [2024-12-06 18:26:13.794618] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:27:43.085 18:26:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:43.085 18:26:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:27:43.085 18:26:13 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:27:43.085 18:26:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:27:43.085 18:26:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:27:43.085 18:26:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:27:43.085 18:26:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:27:43.085 18:26:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:43.085 18:26:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:43.085 18:26:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:43.085 18:26:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:43.085 18:26:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:43.085 18:26:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:43.085 18:26:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:43.085 18:26:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:43.085 18:26:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:43.085 18:26:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:43.085 "name": "Existed_Raid", 00:27:43.085 "uuid": "4e2612d0-532f-432c-a6da-0c07a92ea4d1", 00:27:43.085 "strip_size_kb": 64, 00:27:43.085 "state": "configuring", 00:27:43.085 "raid_level": "raid0", 00:27:43.085 "superblock": true, 00:27:43.085 "num_base_bdevs": 3, 00:27:43.085 
"num_base_bdevs_discovered": 2, 00:27:43.085 "num_base_bdevs_operational": 3, 00:27:43.085 "base_bdevs_list": [ 00:27:43.085 { 00:27:43.085 "name": null, 00:27:43.085 "uuid": "c78a884e-0cfb-407e-9f83-07523c8760b7", 00:27:43.085 "is_configured": false, 00:27:43.085 "data_offset": 0, 00:27:43.085 "data_size": 63488 00:27:43.085 }, 00:27:43.085 { 00:27:43.085 "name": "BaseBdev2", 00:27:43.085 "uuid": "3146b9b6-d3a1-4cff-a795-1e20534d9d00", 00:27:43.085 "is_configured": true, 00:27:43.085 "data_offset": 2048, 00:27:43.085 "data_size": 63488 00:27:43.085 }, 00:27:43.085 { 00:27:43.085 "name": "BaseBdev3", 00:27:43.085 "uuid": "787770e3-9230-4b10-a990-e5d128167a3d", 00:27:43.085 "is_configured": true, 00:27:43.085 "data_offset": 2048, 00:27:43.085 "data_size": 63488 00:27:43.085 } 00:27:43.085 ] 00:27:43.085 }' 00:27:43.085 18:26:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:43.085 18:26:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:43.344 18:26:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:43.344 18:26:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:43.344 18:26:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:43.344 18:26:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:27:43.344 18:26:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:43.344 18:26:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:27:43.603 18:26:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:43.603 18:26:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:43.603 18:26:14 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:43.603 18:26:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:27:43.603 18:26:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:43.603 18:26:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u c78a884e-0cfb-407e-9f83-07523c8760b7 00:27:43.603 18:26:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:43.603 18:26:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:43.603 [2024-12-06 18:26:14.376714] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:27:43.603 [2024-12-06 18:26:14.376962] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:27:43.603 [2024-12-06 18:26:14.376981] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:27:43.603 [2024-12-06 18:26:14.377253] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:27:43.603 [2024-12-06 18:26:14.377393] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:27:43.603 [2024-12-06 18:26:14.377404] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:27:43.603 [2024-12-06 18:26:14.377557] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:43.603 NewBaseBdev 00:27:43.603 18:26:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:43.603 18:26:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:27:43.603 18:26:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 
00:27:43.603 18:26:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:27:43.603 18:26:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:27:43.603 18:26:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:27:43.603 18:26:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:27:43.603 18:26:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:27:43.603 18:26:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:43.603 18:26:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:43.603 18:26:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:43.603 18:26:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:27:43.603 18:26:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:43.603 18:26:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:43.604 [ 00:27:43.604 { 00:27:43.604 "name": "NewBaseBdev", 00:27:43.604 "aliases": [ 00:27:43.604 "c78a884e-0cfb-407e-9f83-07523c8760b7" 00:27:43.604 ], 00:27:43.604 "product_name": "Malloc disk", 00:27:43.604 "block_size": 512, 00:27:43.604 "num_blocks": 65536, 00:27:43.604 "uuid": "c78a884e-0cfb-407e-9f83-07523c8760b7", 00:27:43.604 "assigned_rate_limits": { 00:27:43.604 "rw_ios_per_sec": 0, 00:27:43.604 "rw_mbytes_per_sec": 0, 00:27:43.604 "r_mbytes_per_sec": 0, 00:27:43.604 "w_mbytes_per_sec": 0 00:27:43.604 }, 00:27:43.604 "claimed": true, 00:27:43.604 "claim_type": "exclusive_write", 00:27:43.604 "zoned": false, 00:27:43.604 "supported_io_types": { 00:27:43.604 "read": true, 00:27:43.604 "write": true, 
00:27:43.604 "unmap": true, 00:27:43.604 "flush": true, 00:27:43.604 "reset": true, 00:27:43.604 "nvme_admin": false, 00:27:43.604 "nvme_io": false, 00:27:43.604 "nvme_io_md": false, 00:27:43.604 "write_zeroes": true, 00:27:43.604 "zcopy": true, 00:27:43.604 "get_zone_info": false, 00:27:43.604 "zone_management": false, 00:27:43.604 "zone_append": false, 00:27:43.604 "compare": false, 00:27:43.604 "compare_and_write": false, 00:27:43.604 "abort": true, 00:27:43.604 "seek_hole": false, 00:27:43.604 "seek_data": false, 00:27:43.604 "copy": true, 00:27:43.604 "nvme_iov_md": false 00:27:43.604 }, 00:27:43.604 "memory_domains": [ 00:27:43.604 { 00:27:43.604 "dma_device_id": "system", 00:27:43.604 "dma_device_type": 1 00:27:43.604 }, 00:27:43.604 { 00:27:43.604 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:43.604 "dma_device_type": 2 00:27:43.604 } 00:27:43.604 ], 00:27:43.604 "driver_specific": {} 00:27:43.604 } 00:27:43.604 ] 00:27:43.604 18:26:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:43.604 18:26:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:27:43.604 18:26:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:27:43.604 18:26:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:27:43.604 18:26:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:27:43.604 18:26:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:27:43.604 18:26:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:27:43.604 18:26:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:27:43.604 18:26:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:27:43.604 18:26:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:43.604 18:26:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:43.604 18:26:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:43.604 18:26:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:43.604 18:26:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:43.604 18:26:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:27:43.604 18:26:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:43.604 18:26:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:43.604 18:26:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:43.604 "name": "Existed_Raid", 00:27:43.604 "uuid": "4e2612d0-532f-432c-a6da-0c07a92ea4d1", 00:27:43.604 "strip_size_kb": 64, 00:27:43.604 "state": "online", 00:27:43.604 "raid_level": "raid0", 00:27:43.604 "superblock": true, 00:27:43.604 "num_base_bdevs": 3, 00:27:43.604 "num_base_bdevs_discovered": 3, 00:27:43.604 "num_base_bdevs_operational": 3, 00:27:43.604 "base_bdevs_list": [ 00:27:43.604 { 00:27:43.604 "name": "NewBaseBdev", 00:27:43.604 "uuid": "c78a884e-0cfb-407e-9f83-07523c8760b7", 00:27:43.604 "is_configured": true, 00:27:43.604 "data_offset": 2048, 00:27:43.604 "data_size": 63488 00:27:43.604 }, 00:27:43.604 { 00:27:43.604 "name": "BaseBdev2", 00:27:43.604 "uuid": "3146b9b6-d3a1-4cff-a795-1e20534d9d00", 00:27:43.604 "is_configured": true, 00:27:43.604 "data_offset": 2048, 00:27:43.604 "data_size": 63488 00:27:43.604 }, 00:27:43.604 { 00:27:43.604 "name": "BaseBdev3", 00:27:43.604 "uuid": 
"787770e3-9230-4b10-a990-e5d128167a3d", 00:27:43.604 "is_configured": true, 00:27:43.604 "data_offset": 2048, 00:27:43.604 "data_size": 63488 00:27:43.604 } 00:27:43.604 ] 00:27:43.604 }' 00:27:43.604 18:26:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:43.604 18:26:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:44.170 18:26:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:27:44.170 18:26:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:27:44.170 18:26:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:27:44.170 18:26:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:27:44.170 18:26:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:27:44.170 18:26:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:27:44.170 18:26:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:27:44.170 18:26:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:44.170 18:26:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:44.170 18:26:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:27:44.170 [2024-12-06 18:26:14.872399] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:27:44.170 18:26:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:44.170 18:26:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:27:44.170 "name": "Existed_Raid", 00:27:44.170 "aliases": [ 00:27:44.170 "4e2612d0-532f-432c-a6da-0c07a92ea4d1" 
00:27:44.170 ], 00:27:44.170 "product_name": "Raid Volume", 00:27:44.170 "block_size": 512, 00:27:44.170 "num_blocks": 190464, 00:27:44.170 "uuid": "4e2612d0-532f-432c-a6da-0c07a92ea4d1", 00:27:44.170 "assigned_rate_limits": { 00:27:44.170 "rw_ios_per_sec": 0, 00:27:44.170 "rw_mbytes_per_sec": 0, 00:27:44.170 "r_mbytes_per_sec": 0, 00:27:44.170 "w_mbytes_per_sec": 0 00:27:44.170 }, 00:27:44.170 "claimed": false, 00:27:44.170 "zoned": false, 00:27:44.170 "supported_io_types": { 00:27:44.170 "read": true, 00:27:44.170 "write": true, 00:27:44.170 "unmap": true, 00:27:44.170 "flush": true, 00:27:44.170 "reset": true, 00:27:44.170 "nvme_admin": false, 00:27:44.170 "nvme_io": false, 00:27:44.171 "nvme_io_md": false, 00:27:44.171 "write_zeroes": true, 00:27:44.171 "zcopy": false, 00:27:44.171 "get_zone_info": false, 00:27:44.171 "zone_management": false, 00:27:44.171 "zone_append": false, 00:27:44.171 "compare": false, 00:27:44.171 "compare_and_write": false, 00:27:44.171 "abort": false, 00:27:44.171 "seek_hole": false, 00:27:44.171 "seek_data": false, 00:27:44.171 "copy": false, 00:27:44.171 "nvme_iov_md": false 00:27:44.171 }, 00:27:44.171 "memory_domains": [ 00:27:44.171 { 00:27:44.171 "dma_device_id": "system", 00:27:44.171 "dma_device_type": 1 00:27:44.171 }, 00:27:44.171 { 00:27:44.171 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:44.171 "dma_device_type": 2 00:27:44.171 }, 00:27:44.171 { 00:27:44.171 "dma_device_id": "system", 00:27:44.171 "dma_device_type": 1 00:27:44.171 }, 00:27:44.171 { 00:27:44.171 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:44.171 "dma_device_type": 2 00:27:44.171 }, 00:27:44.171 { 00:27:44.171 "dma_device_id": "system", 00:27:44.171 "dma_device_type": 1 00:27:44.171 }, 00:27:44.171 { 00:27:44.171 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:44.171 "dma_device_type": 2 00:27:44.171 } 00:27:44.171 ], 00:27:44.171 "driver_specific": { 00:27:44.171 "raid": { 00:27:44.171 "uuid": "4e2612d0-532f-432c-a6da-0c07a92ea4d1", 00:27:44.171 
"strip_size_kb": 64, 00:27:44.171 "state": "online", 00:27:44.171 "raid_level": "raid0", 00:27:44.171 "superblock": true, 00:27:44.171 "num_base_bdevs": 3, 00:27:44.171 "num_base_bdevs_discovered": 3, 00:27:44.171 "num_base_bdevs_operational": 3, 00:27:44.171 "base_bdevs_list": [ 00:27:44.171 { 00:27:44.171 "name": "NewBaseBdev", 00:27:44.171 "uuid": "c78a884e-0cfb-407e-9f83-07523c8760b7", 00:27:44.171 "is_configured": true, 00:27:44.171 "data_offset": 2048, 00:27:44.171 "data_size": 63488 00:27:44.171 }, 00:27:44.171 { 00:27:44.171 "name": "BaseBdev2", 00:27:44.171 "uuid": "3146b9b6-d3a1-4cff-a795-1e20534d9d00", 00:27:44.171 "is_configured": true, 00:27:44.171 "data_offset": 2048, 00:27:44.171 "data_size": 63488 00:27:44.171 }, 00:27:44.171 { 00:27:44.171 "name": "BaseBdev3", 00:27:44.171 "uuid": "787770e3-9230-4b10-a990-e5d128167a3d", 00:27:44.171 "is_configured": true, 00:27:44.171 "data_offset": 2048, 00:27:44.171 "data_size": 63488 00:27:44.171 } 00:27:44.171 ] 00:27:44.171 } 00:27:44.171 } 00:27:44.171 }' 00:27:44.171 18:26:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:27:44.171 18:26:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:27:44.171 BaseBdev2 00:27:44.171 BaseBdev3' 00:27:44.171 18:26:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:27:44.171 18:26:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:27:44.171 18:26:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:27:44.171 18:26:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:27:44.171 18:26:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:27:44.171 18:26:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:27:44.171 18:26:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:44.171 18:26:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:44.171 18:26:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:27:44.171 18:26:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:27:44.171 18:26:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:27:44.171 18:26:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:27:44.171 18:26:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:27:44.171 18:26:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:44.171 18:26:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:44.171 18:26:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:44.171 18:26:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:27:44.171 18:26:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:27:44.171 18:26:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:27:44.171 18:26:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:27:44.171 18:26:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:44.171 18:26:15 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:27:44.171 18:26:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:44.428 18:26:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:44.428 18:26:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:27:44.428 18:26:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:27:44.428 18:26:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:27:44.428 18:26:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:44.428 18:26:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:44.428 [2024-12-06 18:26:15.147691] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:27:44.429 [2024-12-06 18:26:15.147726] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:27:44.429 [2024-12-06 18:26:15.147804] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:27:44.429 [2024-12-06 18:26:15.147856] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:27:44.429 [2024-12-06 18:26:15.147871] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:27:44.429 18:26:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:44.429 18:26:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 64184 00:27:44.429 18:26:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 64184 ']' 00:27:44.429 18:26:15 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 64184 00:27:44.429 18:26:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:27:44.429 18:26:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:44.429 18:26:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 64184 00:27:44.429 killing process with pid 64184 00:27:44.429 18:26:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:27:44.429 18:26:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:27:44.429 18:26:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 64184' 00:27:44.429 18:26:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 64184 00:27:44.429 [2024-12-06 18:26:15.204808] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:27:44.429 18:26:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 64184 00:27:44.686 [2024-12-06 18:26:15.508629] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:27:46.057 ************************************ 00:27:46.057 END TEST raid_state_function_test_sb 00:27:46.057 ************************************ 00:27:46.057 18:26:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:27:46.057 00:27:46.057 real 0m10.966s 00:27:46.057 user 0m17.442s 00:27:46.057 sys 0m2.242s 00:27:46.057 18:26:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:46.057 18:26:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:27:46.057 18:26:16 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid0 3 00:27:46.057 18:26:16 
bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:27:46.057 18:26:16 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:46.057 18:26:16 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:27:46.057 ************************************ 00:27:46.057 START TEST raid_superblock_test 00:27:46.057 ************************************ 00:27:46.057 18:26:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid0 3 00:27:46.057 18:26:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0 00:27:46.057 18:26:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:27:46.057 18:26:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:27:46.057 18:26:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:27:46.057 18:26:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:27:46.057 18:26:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:27:46.057 18:26:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:27:46.057 18:26:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:27:46.057 18:26:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:27:46.057 18:26:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:27:46.057 18:26:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:27:46.057 18:26:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:27:46.057 18:26:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:27:46.057 18:26:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 00:27:46.057 18:26:16 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:27:46.057 18:26:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:27:46.057 18:26:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=64804 00:27:46.057 18:26:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 64804 00:27:46.057 18:26:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:27:46.057 18:26:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 64804 ']' 00:27:46.057 18:26:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:46.057 18:26:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:46.057 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:46.057 18:26:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:46.057 18:26:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:46.057 18:26:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:46.057 [2024-12-06 18:26:16.845927] Starting SPDK v25.01-pre git sha1 b6a18b192 / DPDK 24.03.0 initialization... 
00:27:46.057 [2024-12-06 18:26:16.846101] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64804 ] 00:27:46.315 [2024-12-06 18:26:17.028331] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:46.315 [2024-12-06 18:26:17.142158] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:46.572 [2024-12-06 18:26:17.347442] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:27:46.572 [2024-12-06 18:26:17.347491] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:27:46.830 18:26:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:46.830 18:26:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:27:46.830 18:26:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:27:46.830 18:26:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:27:46.830 18:26:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:27:46.830 18:26:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:27:46.830 18:26:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:27:46.830 18:26:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:27:46.830 18:26:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:27:46.830 18:26:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:27:46.830 18:26:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:27:46.830 
18:26:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:46.830 18:26:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:46.830 malloc1 00:27:46.830 18:26:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:46.830 18:26:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:27:46.830 18:26:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:46.830 18:26:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:46.830 [2024-12-06 18:26:17.749860] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:27:46.830 [2024-12-06 18:26:17.750132] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:46.830 [2024-12-06 18:26:17.750177] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:27:46.830 [2024-12-06 18:26:17.750190] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:46.830 [2024-12-06 18:26:17.752565] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:46.830 [2024-12-06 18:26:17.752605] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:27:46.830 pt1 00:27:46.830 18:26:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:46.830 18:26:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:27:46.830 18:26:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:27:46.830 18:26:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:27:46.830 18:26:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:27:46.830 18:26:17 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:27:46.830 18:26:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:27:46.830 18:26:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:27:46.830 18:26:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:27:46.830 18:26:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:27:46.830 18:26:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:46.830 18:26:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:47.088 malloc2 00:27:47.088 18:26:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:47.088 18:26:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:27:47.088 18:26:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:47.088 18:26:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:47.088 [2024-12-06 18:26:17.806774] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:27:47.088 [2024-12-06 18:26:17.806957] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:47.088 [2024-12-06 18:26:17.807069] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:27:47.088 [2024-12-06 18:26:17.807178] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:47.088 [2024-12-06 18:26:17.809605] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:47.088 [2024-12-06 18:26:17.809771] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:27:47.088 
pt2 00:27:47.088 18:26:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:47.088 18:26:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:27:47.088 18:26:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:27:47.088 18:26:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:27:47.088 18:26:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:27:47.088 18:26:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:27:47.088 18:26:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:27:47.088 18:26:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:27:47.088 18:26:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:27:47.088 18:26:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:27:47.088 18:26:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:47.088 18:26:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:47.088 malloc3 00:27:47.088 18:26:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:47.088 18:26:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:27:47.089 18:26:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:47.089 18:26:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:47.089 [2024-12-06 18:26:17.882159] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:27:47.089 [2024-12-06 18:26:17.882223] 
vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:47.089 [2024-12-06 18:26:17.882250] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:27:47.089 [2024-12-06 18:26:17.882262] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:47.089 [2024-12-06 18:26:17.884716] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:47.089 [2024-12-06 18:26:17.884768] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:27:47.089 pt3 00:27:47.089 18:26:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:47.089 18:26:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:27:47.089 18:26:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:27:47.089 18:26:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:27:47.089 18:26:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:47.089 18:26:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:47.089 [2024-12-06 18:26:17.894224] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:27:47.089 [2024-12-06 18:26:17.896376] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:27:47.089 [2024-12-06 18:26:17.896450] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:27:47.089 [2024-12-06 18:26:17.896618] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:27:47.089 [2024-12-06 18:26:17.896634] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:27:47.089 [2024-12-06 18:26:17.896917] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 
00:27:47.089 [2024-12-06 18:26:17.897084] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:27:47.089 [2024-12-06 18:26:17.897094] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:27:47.089 [2024-12-06 18:26:17.897296] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:47.089 18:26:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:47.089 18:26:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:27:47.089 18:26:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:27:47.089 18:26:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:27:47.089 18:26:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:27:47.089 18:26:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:27:47.089 18:26:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:27:47.089 18:26:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:47.089 18:26:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:47.089 18:26:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:47.089 18:26:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:47.089 18:26:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:47.089 18:26:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:47.089 18:26:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:47.089 18:26:17 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:47.089 18:26:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:47.089 18:26:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:47.089 "name": "raid_bdev1", 00:27:47.089 "uuid": "eaa3d3da-8128-4227-ab4d-3409aa24ed2a", 00:27:47.089 "strip_size_kb": 64, 00:27:47.089 "state": "online", 00:27:47.089 "raid_level": "raid0", 00:27:47.089 "superblock": true, 00:27:47.089 "num_base_bdevs": 3, 00:27:47.089 "num_base_bdevs_discovered": 3, 00:27:47.089 "num_base_bdevs_operational": 3, 00:27:47.089 "base_bdevs_list": [ 00:27:47.089 { 00:27:47.089 "name": "pt1", 00:27:47.089 "uuid": "00000000-0000-0000-0000-000000000001", 00:27:47.089 "is_configured": true, 00:27:47.089 "data_offset": 2048, 00:27:47.089 "data_size": 63488 00:27:47.089 }, 00:27:47.089 { 00:27:47.089 "name": "pt2", 00:27:47.089 "uuid": "00000000-0000-0000-0000-000000000002", 00:27:47.089 "is_configured": true, 00:27:47.089 "data_offset": 2048, 00:27:47.089 "data_size": 63488 00:27:47.089 }, 00:27:47.089 { 00:27:47.089 "name": "pt3", 00:27:47.089 "uuid": "00000000-0000-0000-0000-000000000003", 00:27:47.089 "is_configured": true, 00:27:47.089 "data_offset": 2048, 00:27:47.089 "data_size": 63488 00:27:47.089 } 00:27:47.089 ] 00:27:47.089 }' 00:27:47.089 18:26:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:47.089 18:26:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:47.347 18:26:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:27:47.347 18:26:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:27:47.347 18:26:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:27:47.347 18:26:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local 
base_bdev_names 00:27:47.347 18:26:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:27:47.347 18:26:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:27:47.347 18:26:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:27:47.347 18:26:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:27:47.347 18:26:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:47.347 18:26:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:47.347 [2024-12-06 18:26:18.282169] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:27:47.605 18:26:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:47.605 18:26:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:27:47.605 "name": "raid_bdev1", 00:27:47.605 "aliases": [ 00:27:47.605 "eaa3d3da-8128-4227-ab4d-3409aa24ed2a" 00:27:47.605 ], 00:27:47.605 "product_name": "Raid Volume", 00:27:47.605 "block_size": 512, 00:27:47.605 "num_blocks": 190464, 00:27:47.605 "uuid": "eaa3d3da-8128-4227-ab4d-3409aa24ed2a", 00:27:47.605 "assigned_rate_limits": { 00:27:47.605 "rw_ios_per_sec": 0, 00:27:47.605 "rw_mbytes_per_sec": 0, 00:27:47.605 "r_mbytes_per_sec": 0, 00:27:47.605 "w_mbytes_per_sec": 0 00:27:47.605 }, 00:27:47.605 "claimed": false, 00:27:47.605 "zoned": false, 00:27:47.605 "supported_io_types": { 00:27:47.605 "read": true, 00:27:47.605 "write": true, 00:27:47.605 "unmap": true, 00:27:47.605 "flush": true, 00:27:47.605 "reset": true, 00:27:47.605 "nvme_admin": false, 00:27:47.605 "nvme_io": false, 00:27:47.605 "nvme_io_md": false, 00:27:47.605 "write_zeroes": true, 00:27:47.605 "zcopy": false, 00:27:47.605 "get_zone_info": false, 00:27:47.605 "zone_management": false, 00:27:47.605 "zone_append": false, 00:27:47.605 "compare": 
false, 00:27:47.605 "compare_and_write": false, 00:27:47.605 "abort": false, 00:27:47.605 "seek_hole": false, 00:27:47.605 "seek_data": false, 00:27:47.605 "copy": false, 00:27:47.605 "nvme_iov_md": false 00:27:47.605 }, 00:27:47.605 "memory_domains": [ 00:27:47.605 { 00:27:47.605 "dma_device_id": "system", 00:27:47.605 "dma_device_type": 1 00:27:47.605 }, 00:27:47.605 { 00:27:47.605 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:47.605 "dma_device_type": 2 00:27:47.605 }, 00:27:47.605 { 00:27:47.605 "dma_device_id": "system", 00:27:47.605 "dma_device_type": 1 00:27:47.605 }, 00:27:47.605 { 00:27:47.605 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:47.605 "dma_device_type": 2 00:27:47.605 }, 00:27:47.605 { 00:27:47.605 "dma_device_id": "system", 00:27:47.605 "dma_device_type": 1 00:27:47.605 }, 00:27:47.605 { 00:27:47.605 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:47.605 "dma_device_type": 2 00:27:47.605 } 00:27:47.605 ], 00:27:47.605 "driver_specific": { 00:27:47.605 "raid": { 00:27:47.605 "uuid": "eaa3d3da-8128-4227-ab4d-3409aa24ed2a", 00:27:47.605 "strip_size_kb": 64, 00:27:47.605 "state": "online", 00:27:47.605 "raid_level": "raid0", 00:27:47.605 "superblock": true, 00:27:47.605 "num_base_bdevs": 3, 00:27:47.605 "num_base_bdevs_discovered": 3, 00:27:47.605 "num_base_bdevs_operational": 3, 00:27:47.605 "base_bdevs_list": [ 00:27:47.605 { 00:27:47.605 "name": "pt1", 00:27:47.605 "uuid": "00000000-0000-0000-0000-000000000001", 00:27:47.605 "is_configured": true, 00:27:47.605 "data_offset": 2048, 00:27:47.605 "data_size": 63488 00:27:47.605 }, 00:27:47.605 { 00:27:47.605 "name": "pt2", 00:27:47.606 "uuid": "00000000-0000-0000-0000-000000000002", 00:27:47.606 "is_configured": true, 00:27:47.606 "data_offset": 2048, 00:27:47.606 "data_size": 63488 00:27:47.606 }, 00:27:47.606 { 00:27:47.606 "name": "pt3", 00:27:47.606 "uuid": "00000000-0000-0000-0000-000000000003", 00:27:47.606 "is_configured": true, 00:27:47.606 "data_offset": 2048, 00:27:47.606 "data_size": 
63488 00:27:47.606 } 00:27:47.606 ] 00:27:47.606 } 00:27:47.606 } 00:27:47.606 }' 00:27:47.606 18:26:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:27:47.606 18:26:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:27:47.606 pt2 00:27:47.606 pt3' 00:27:47.606 18:26:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:27:47.606 18:26:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:27:47.606 18:26:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:27:47.606 18:26:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:27:47.606 18:26:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:47.606 18:26:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:27:47.606 18:26:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:47.606 18:26:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:47.606 18:26:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:27:47.606 18:26:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:27:47.606 18:26:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:27:47.606 18:26:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:27:47.606 18:26:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:47.606 18:26:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:47.606 
18:26:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:27:47.606 18:26:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:47.606 18:26:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:27:47.606 18:26:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:27:47.606 18:26:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:27:47.606 18:26:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:27:47.606 18:26:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:27:47.606 18:26:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:47.606 18:26:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:47.606 18:26:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:47.865 18:26:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:27:47.865 18:26:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:27:47.865 18:26:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:27:47.865 18:26:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:27:47.865 18:26:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:47.865 18:26:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:47.865 [2024-12-06 18:26:18.562133] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:27:47.865 18:26:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:27:47.865 18:26:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=eaa3d3da-8128-4227-ab4d-3409aa24ed2a 00:27:47.865 18:26:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z eaa3d3da-8128-4227-ab4d-3409aa24ed2a ']' 00:27:47.865 18:26:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:27:47.865 18:26:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:47.865 18:26:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:47.865 [2024-12-06 18:26:18.601829] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:27:47.865 [2024-12-06 18:26:18.601868] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:27:47.865 [2024-12-06 18:26:18.601968] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:27:47.865 [2024-12-06 18:26:18.602030] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:27:47.865 [2024-12-06 18:26:18.602042] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:27:47.865 18:26:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:47.865 18:26:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:47.865 18:26:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:47.865 18:26:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:47.865 18:26:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:27:47.865 18:26:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:47.865 18:26:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 
00:27:47.865 18:26:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:27:47.865 18:26:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:27:47.865 18:26:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:27:47.865 18:26:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:47.865 18:26:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:47.865 18:26:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:47.865 18:26:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:27:47.865 18:26:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:27:47.865 18:26:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:47.865 18:26:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:47.865 18:26:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:47.865 18:26:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:27:47.865 18:26:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:27:47.865 18:26:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:47.865 18:26:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:47.865 18:26:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:47.865 18:26:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:27:47.865 18:26:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:27:47.865 18:26:18 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:47.865 18:26:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:47.865 18:26:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:47.865 18:26:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:27:47.865 18:26:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:27:47.865 18:26:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:27:47.865 18:26:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:27:47.865 18:26:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:27:47.865 18:26:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:47.865 18:26:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:27:47.865 18:26:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:47.865 18:26:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:27:47.865 18:26:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:47.865 18:26:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:47.865 [2024-12-06 18:26:18.761910] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:27:47.865 [2024-12-06 18:26:18.764295] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:27:47.865 [2024-12-06 18:26:18.764372] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:27:47.865 [2024-12-06 18:26:18.764431] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:27:47.865 [2024-12-06 18:26:18.764493] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:27:47.865 [2024-12-06 18:26:18.764518] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:27:47.865 [2024-12-06 18:26:18.764541] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:27:47.865 [2024-12-06 18:26:18.764556] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:27:47.865 request: 00:27:47.865 { 00:27:47.865 "name": "raid_bdev1", 00:27:47.865 "raid_level": "raid0", 00:27:47.865 "base_bdevs": [ 00:27:47.865 "malloc1", 00:27:47.865 "malloc2", 00:27:47.865 "malloc3" 00:27:47.865 ], 00:27:47.865 "strip_size_kb": 64, 00:27:47.865 "superblock": false, 00:27:47.865 "method": "bdev_raid_create", 00:27:47.865 "req_id": 1 00:27:47.865 } 00:27:47.865 Got JSON-RPC error response 00:27:47.865 response: 00:27:47.865 { 00:27:47.865 "code": -17, 00:27:47.865 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:27:47.865 } 00:27:47.865 18:26:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:27:47.865 18:26:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:27:47.865 18:26:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:27:47.865 18:26:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:27:47.865 18:26:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:27:47.865 18:26:18 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:47.865 18:26:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:27:47.865 18:26:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:47.865 18:26:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:47.865 18:26:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:48.124 18:26:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:27:48.124 18:26:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:27:48.124 18:26:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:27:48.124 18:26:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:48.124 18:26:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:48.124 [2024-12-06 18:26:18.825862] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:27:48.124 [2024-12-06 18:26:18.825942] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:48.124 [2024-12-06 18:26:18.825986] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:27:48.124 [2024-12-06 18:26:18.825999] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:48.124 [2024-12-06 18:26:18.828781] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:48.124 [2024-12-06 18:26:18.828835] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:27:48.124 [2024-12-06 18:26:18.828939] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:27:48.124 [2024-12-06 18:26:18.829004] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 
00:27:48.124 pt1 00:27:48.124 18:26:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:48.124 18:26:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3 00:27:48.124 18:26:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:27:48.124 18:26:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:27:48.124 18:26:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:27:48.124 18:26:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:27:48.124 18:26:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:27:48.124 18:26:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:48.124 18:26:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:48.124 18:26:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:48.124 18:26:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:48.124 18:26:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:48.124 18:26:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:48.124 18:26:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:48.124 18:26:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:48.124 18:26:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:48.124 18:26:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:48.124 "name": "raid_bdev1", 00:27:48.124 "uuid": "eaa3d3da-8128-4227-ab4d-3409aa24ed2a", 00:27:48.124 
"strip_size_kb": 64, 00:27:48.124 "state": "configuring", 00:27:48.124 "raid_level": "raid0", 00:27:48.124 "superblock": true, 00:27:48.124 "num_base_bdevs": 3, 00:27:48.124 "num_base_bdevs_discovered": 1, 00:27:48.124 "num_base_bdevs_operational": 3, 00:27:48.124 "base_bdevs_list": [ 00:27:48.124 { 00:27:48.124 "name": "pt1", 00:27:48.124 "uuid": "00000000-0000-0000-0000-000000000001", 00:27:48.124 "is_configured": true, 00:27:48.124 "data_offset": 2048, 00:27:48.124 "data_size": 63488 00:27:48.124 }, 00:27:48.124 { 00:27:48.124 "name": null, 00:27:48.124 "uuid": "00000000-0000-0000-0000-000000000002", 00:27:48.124 "is_configured": false, 00:27:48.124 "data_offset": 2048, 00:27:48.124 "data_size": 63488 00:27:48.124 }, 00:27:48.124 { 00:27:48.124 "name": null, 00:27:48.124 "uuid": "00000000-0000-0000-0000-000000000003", 00:27:48.124 "is_configured": false, 00:27:48.124 "data_offset": 2048, 00:27:48.124 "data_size": 63488 00:27:48.124 } 00:27:48.124 ] 00:27:48.124 }' 00:27:48.124 18:26:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:48.124 18:26:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:48.382 18:26:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:27:48.382 18:26:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:27:48.382 18:26:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:48.382 18:26:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:48.382 [2024-12-06 18:26:19.257857] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:27:48.382 [2024-12-06 18:26:19.258138] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:48.382 [2024-12-06 18:26:19.258192] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created 
at: 0x0x616000009c80 00:27:48.382 [2024-12-06 18:26:19.258206] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:48.382 [2024-12-06 18:26:19.258688] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:48.382 [2024-12-06 18:26:19.258708] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:27:48.382 [2024-12-06 18:26:19.258810] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:27:48.382 [2024-12-06 18:26:19.258853] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:27:48.382 pt2 00:27:48.382 18:26:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:48.382 18:26:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:27:48.382 18:26:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:48.382 18:26:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:48.382 [2024-12-06 18:26:19.269933] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:27:48.382 18:26:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:48.382 18:26:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3 00:27:48.382 18:26:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:27:48.382 18:26:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:27:48.382 18:26:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:27:48.382 18:26:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:27:48.382 18:26:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:27:48.382 18:26:19 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:48.382 18:26:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:48.382 18:26:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:48.382 18:26:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:48.382 18:26:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:48.382 18:26:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:48.382 18:26:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:48.382 18:26:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:48.382 18:26:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:48.382 18:26:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:48.382 "name": "raid_bdev1", 00:27:48.382 "uuid": "eaa3d3da-8128-4227-ab4d-3409aa24ed2a", 00:27:48.382 "strip_size_kb": 64, 00:27:48.382 "state": "configuring", 00:27:48.382 "raid_level": "raid0", 00:27:48.382 "superblock": true, 00:27:48.382 "num_base_bdevs": 3, 00:27:48.382 "num_base_bdevs_discovered": 1, 00:27:48.382 "num_base_bdevs_operational": 3, 00:27:48.382 "base_bdevs_list": [ 00:27:48.382 { 00:27:48.382 "name": "pt1", 00:27:48.382 "uuid": "00000000-0000-0000-0000-000000000001", 00:27:48.382 "is_configured": true, 00:27:48.382 "data_offset": 2048, 00:27:48.382 "data_size": 63488 00:27:48.382 }, 00:27:48.382 { 00:27:48.382 "name": null, 00:27:48.382 "uuid": "00000000-0000-0000-0000-000000000002", 00:27:48.382 "is_configured": false, 00:27:48.382 "data_offset": 0, 00:27:48.382 "data_size": 63488 00:27:48.382 }, 00:27:48.382 { 00:27:48.382 "name": null, 00:27:48.382 "uuid": "00000000-0000-0000-0000-000000000003", 00:27:48.382 
"is_configured": false, 00:27:48.382 "data_offset": 2048, 00:27:48.382 "data_size": 63488 00:27:48.382 } 00:27:48.382 ] 00:27:48.382 }' 00:27:48.382 18:26:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:48.382 18:26:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:48.947 18:26:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:27:48.947 18:26:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:27:48.947 18:26:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:27:48.947 18:26:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:48.947 18:26:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:48.947 [2024-12-06 18:26:19.745875] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:27:48.947 [2024-12-06 18:26:19.745957] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:48.947 [2024-12-06 18:26:19.745980] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:27:48.948 [2024-12-06 18:26:19.745995] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:48.948 [2024-12-06 18:26:19.746537] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:48.948 [2024-12-06 18:26:19.746564] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:27:48.948 [2024-12-06 18:26:19.746657] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:27:48.948 [2024-12-06 18:26:19.746685] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:27:48.948 pt2 00:27:48.948 18:26:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:27:48.948 18:26:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:27:48.948 18:26:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:27:48.948 18:26:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:27:48.948 18:26:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:48.948 18:26:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:48.948 [2024-12-06 18:26:19.757831] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:27:48.948 [2024-12-06 18:26:19.757905] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:48.948 [2024-12-06 18:26:19.757943] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:27:48.948 [2024-12-06 18:26:19.757959] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:48.948 [2024-12-06 18:26:19.758450] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:48.948 [2024-12-06 18:26:19.758479] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:27:48.948 [2024-12-06 18:26:19.758565] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:27:48.948 [2024-12-06 18:26:19.758592] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:27:48.948 [2024-12-06 18:26:19.758714] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:27:48.948 [2024-12-06 18:26:19.758729] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:27:48.948 [2024-12-06 18:26:19.759006] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:27:48.948 [2024-12-06 18:26:19.759187] 
bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:27:48.948 [2024-12-06 18:26:19.759211] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:27:48.948 [2024-12-06 18:26:19.759400] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:48.948 pt3 00:27:48.948 18:26:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:48.948 18:26:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:27:48.948 18:26:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:27:48.948 18:26:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:27:48.948 18:26:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:27:48.948 18:26:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:27:48.948 18:26:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:27:48.948 18:26:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:27:48.948 18:26:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:27:48.948 18:26:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:48.948 18:26:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:48.948 18:26:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:48.948 18:26:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:48.948 18:26:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:48.948 18:26:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:27:48.948 18:26:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:48.948 18:26:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:48.948 18:26:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:48.948 18:26:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:48.948 "name": "raid_bdev1", 00:27:48.948 "uuid": "eaa3d3da-8128-4227-ab4d-3409aa24ed2a", 00:27:48.948 "strip_size_kb": 64, 00:27:48.948 "state": "online", 00:27:48.948 "raid_level": "raid0", 00:27:48.948 "superblock": true, 00:27:48.948 "num_base_bdevs": 3, 00:27:48.948 "num_base_bdevs_discovered": 3, 00:27:48.948 "num_base_bdevs_operational": 3, 00:27:48.948 "base_bdevs_list": [ 00:27:48.948 { 00:27:48.948 "name": "pt1", 00:27:48.948 "uuid": "00000000-0000-0000-0000-000000000001", 00:27:48.948 "is_configured": true, 00:27:48.948 "data_offset": 2048, 00:27:48.948 "data_size": 63488 00:27:48.948 }, 00:27:48.948 { 00:27:48.948 "name": "pt2", 00:27:48.948 "uuid": "00000000-0000-0000-0000-000000000002", 00:27:48.948 "is_configured": true, 00:27:48.948 "data_offset": 2048, 00:27:48.948 "data_size": 63488 00:27:48.948 }, 00:27:48.948 { 00:27:48.948 "name": "pt3", 00:27:48.948 "uuid": "00000000-0000-0000-0000-000000000003", 00:27:48.948 "is_configured": true, 00:27:48.948 "data_offset": 2048, 00:27:48.948 "data_size": 63488 00:27:48.948 } 00:27:48.948 ] 00:27:48.948 }' 00:27:48.948 18:26:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:48.948 18:26:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:49.515 18:26:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:27:49.515 18:26:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:27:49.515 18:26:20 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:27:49.515 18:26:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:27:49.515 18:26:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:27:49.515 18:26:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:27:49.515 18:26:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:27:49.515 18:26:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:27:49.515 18:26:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:49.515 18:26:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:49.515 [2024-12-06 18:26:20.230212] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:27:49.515 18:26:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:49.515 18:26:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:27:49.515 "name": "raid_bdev1", 00:27:49.515 "aliases": [ 00:27:49.515 "eaa3d3da-8128-4227-ab4d-3409aa24ed2a" 00:27:49.515 ], 00:27:49.515 "product_name": "Raid Volume", 00:27:49.515 "block_size": 512, 00:27:49.515 "num_blocks": 190464, 00:27:49.515 "uuid": "eaa3d3da-8128-4227-ab4d-3409aa24ed2a", 00:27:49.515 "assigned_rate_limits": { 00:27:49.515 "rw_ios_per_sec": 0, 00:27:49.515 "rw_mbytes_per_sec": 0, 00:27:49.515 "r_mbytes_per_sec": 0, 00:27:49.515 "w_mbytes_per_sec": 0 00:27:49.515 }, 00:27:49.515 "claimed": false, 00:27:49.515 "zoned": false, 00:27:49.515 "supported_io_types": { 00:27:49.515 "read": true, 00:27:49.515 "write": true, 00:27:49.515 "unmap": true, 00:27:49.515 "flush": true, 00:27:49.515 "reset": true, 00:27:49.515 "nvme_admin": false, 00:27:49.515 "nvme_io": false, 00:27:49.515 "nvme_io_md": false, 00:27:49.515 
"write_zeroes": true, 00:27:49.515 "zcopy": false, 00:27:49.515 "get_zone_info": false, 00:27:49.515 "zone_management": false, 00:27:49.515 "zone_append": false, 00:27:49.515 "compare": false, 00:27:49.515 "compare_and_write": false, 00:27:49.515 "abort": false, 00:27:49.515 "seek_hole": false, 00:27:49.515 "seek_data": false, 00:27:49.515 "copy": false, 00:27:49.515 "nvme_iov_md": false 00:27:49.515 }, 00:27:49.515 "memory_domains": [ 00:27:49.515 { 00:27:49.515 "dma_device_id": "system", 00:27:49.515 "dma_device_type": 1 00:27:49.515 }, 00:27:49.515 { 00:27:49.515 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:49.515 "dma_device_type": 2 00:27:49.515 }, 00:27:49.515 { 00:27:49.515 "dma_device_id": "system", 00:27:49.515 "dma_device_type": 1 00:27:49.515 }, 00:27:49.515 { 00:27:49.515 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:49.515 "dma_device_type": 2 00:27:49.515 }, 00:27:49.515 { 00:27:49.515 "dma_device_id": "system", 00:27:49.515 "dma_device_type": 1 00:27:49.515 }, 00:27:49.515 { 00:27:49.515 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:27:49.515 "dma_device_type": 2 00:27:49.515 } 00:27:49.515 ], 00:27:49.515 "driver_specific": { 00:27:49.515 "raid": { 00:27:49.515 "uuid": "eaa3d3da-8128-4227-ab4d-3409aa24ed2a", 00:27:49.515 "strip_size_kb": 64, 00:27:49.515 "state": "online", 00:27:49.515 "raid_level": "raid0", 00:27:49.515 "superblock": true, 00:27:49.515 "num_base_bdevs": 3, 00:27:49.515 "num_base_bdevs_discovered": 3, 00:27:49.515 "num_base_bdevs_operational": 3, 00:27:49.515 "base_bdevs_list": [ 00:27:49.515 { 00:27:49.515 "name": "pt1", 00:27:49.515 "uuid": "00000000-0000-0000-0000-000000000001", 00:27:49.515 "is_configured": true, 00:27:49.515 "data_offset": 2048, 00:27:49.515 "data_size": 63488 00:27:49.515 }, 00:27:49.515 { 00:27:49.515 "name": "pt2", 00:27:49.515 "uuid": "00000000-0000-0000-0000-000000000002", 00:27:49.515 "is_configured": true, 00:27:49.515 "data_offset": 2048, 00:27:49.515 "data_size": 63488 00:27:49.515 }, 00:27:49.515 
{ 00:27:49.515 "name": "pt3", 00:27:49.515 "uuid": "00000000-0000-0000-0000-000000000003", 00:27:49.515 "is_configured": true, 00:27:49.515 "data_offset": 2048, 00:27:49.515 "data_size": 63488 00:27:49.515 } 00:27:49.515 ] 00:27:49.515 } 00:27:49.515 } 00:27:49.515 }' 00:27:49.515 18:26:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:27:49.515 18:26:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:27:49.515 pt2 00:27:49.515 pt3' 00:27:49.516 18:26:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:27:49.516 18:26:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:27:49.516 18:26:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:27:49.516 18:26:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:27:49.516 18:26:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:49.516 18:26:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:27:49.516 18:26:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:49.516 18:26:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:49.516 18:26:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:27:49.516 18:26:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:27:49.516 18:26:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:27:49.516 18:26:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | 
join(" ")' 00:27:49.516 18:26:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:27:49.516 18:26:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:49.516 18:26:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:49.516 18:26:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:49.516 18:26:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:27:49.516 18:26:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:27:49.516 18:26:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:27:49.516 18:26:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:27:49.516 18:26:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:49.516 18:26:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:49.516 18:26:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:27:49.774 18:26:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:49.774 18:26:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:27:49.774 18:26:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:27:49.774 18:26:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:27:49.774 18:26:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:27:49.774 18:26:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:49.774 18:26:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:49.774 [2024-12-06 
18:26:20.506189] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:27:49.774 18:26:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:49.774 18:26:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' eaa3d3da-8128-4227-ab4d-3409aa24ed2a '!=' eaa3d3da-8128-4227-ab4d-3409aa24ed2a ']' 00:27:49.774 18:26:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0 00:27:49.774 18:26:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:27:49.774 18:26:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:27:49.774 18:26:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 64804 00:27:49.774 18:26:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 64804 ']' 00:27:49.774 18:26:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 64804 00:27:49.774 18:26:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:27:49.774 18:26:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:49.774 18:26:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 64804 00:27:49.774 18:26:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:27:49.774 18:26:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:27:49.774 18:26:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 64804' 00:27:49.774 killing process with pid 64804 00:27:49.774 18:26:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 64804 00:27:49.774 [2024-12-06 18:26:20.596517] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:27:49.774 [2024-12-06 18:26:20.596643] bdev_raid.c: 
492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:27:49.774 18:26:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 64804 00:27:49.774 [2024-12-06 18:26:20.596725] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:27:49.774 [2024-12-06 18:26:20.596741] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:27:50.033 [2024-12-06 18:26:20.925687] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:27:51.409 18:26:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:27:51.409 ************************************ 00:27:51.409 END TEST raid_superblock_test 00:27:51.409 ************************************ 00:27:51.409 00:27:51.409 real 0m5.415s 00:27:51.409 user 0m7.676s 00:27:51.409 sys 0m1.088s 00:27:51.409 18:26:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:51.409 18:26:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:27:51.409 18:26:22 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid0 3 read 00:27:51.409 18:26:22 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:27:51.409 18:26:22 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:51.409 18:26:22 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:27:51.409 ************************************ 00:27:51.409 START TEST raid_read_error_test 00:27:51.409 ************************************ 00:27:51.409 18:26:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 3 read 00:27:51.409 18:26:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:27:51.409 18:26:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:27:51.409 18:26:22 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:27:51.409 18:26:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:27:51.409 18:26:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:27:51.409 18:26:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:27:51.409 18:26:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:27:51.409 18:26:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:27:51.409 18:26:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:27:51.409 18:26:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:27:51.409 18:26:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:27:51.409 18:26:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:27:51.409 18:26:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:27:51.409 18:26:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:27:51.409 18:26:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:27:51.409 18:26:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:27:51.409 18:26:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:27:51.409 18:26:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:27:51.409 18:26:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:27:51.409 18:26:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:27:51.409 18:26:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:27:51.409 18:26:22 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:27:51.409 18:26:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:27:51.409 18:26:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:27:51.409 18:26:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:27:51.409 18:26:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.7GYQpZ5Xnx 00:27:51.409 18:26:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=65063 00:27:51.409 18:26:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 65063 00:27:51.409 18:26:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:27:51.409 18:26:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 65063 ']' 00:27:51.409 18:26:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:51.409 18:26:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:51.409 18:26:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:51.409 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:51.409 18:26:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:51.409 18:26:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:27:51.409 [2024-12-06 18:26:22.354901] Starting SPDK v25.01-pre git sha1 b6a18b192 / DPDK 24.03.0 initialization... 
00:27:51.668 [2024-12-06 18:26:22.355283] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65063 ] 00:27:51.668 [2024-12-06 18:26:22.540476] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:51.926 [2024-12-06 18:26:22.667807] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:52.186 [2024-12-06 18:26:22.890532] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:27:52.186 [2024-12-06 18:26:22.890776] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:27:52.446 18:26:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:52.446 18:26:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:27:52.446 18:26:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:27:52.446 18:26:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:27:52.446 18:26:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:52.446 18:26:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:27:52.446 BaseBdev1_malloc 00:27:52.446 18:26:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:52.446 18:26:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:27:52.446 18:26:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:52.446 18:26:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:27:52.446 true 00:27:52.446 18:26:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:27:52.446 18:26:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:27:52.446 18:26:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:52.446 18:26:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:27:52.446 [2024-12-06 18:26:23.309795] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:27:52.446 [2024-12-06 18:26:23.310110] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:52.446 [2024-12-06 18:26:23.310183] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:27:52.446 [2024-12-06 18:26:23.310203] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:52.446 [2024-12-06 18:26:23.313173] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:52.446 [2024-12-06 18:26:23.313389] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:27:52.446 BaseBdev1 00:27:52.446 18:26:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:52.446 18:26:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:27:52.446 18:26:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:27:52.446 18:26:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:52.446 18:26:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:27:52.446 BaseBdev2_malloc 00:27:52.446 18:26:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:52.446 18:26:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:27:52.446 18:26:23 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:27:52.446 18:26:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:27:52.446 true 00:27:52.446 18:26:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:52.446 18:26:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:27:52.446 18:26:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:52.446 18:26:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:27:52.446 [2024-12-06 18:26:23.382654] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:27:52.446 [2024-12-06 18:26:23.382736] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:52.446 [2024-12-06 18:26:23.382760] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:27:52.446 [2024-12-06 18:26:23.382775] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:52.446 [2024-12-06 18:26:23.385456] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:52.446 [2024-12-06 18:26:23.385507] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:27:52.446 BaseBdev2 00:27:52.446 18:26:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:52.446 18:26:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:27:52.446 18:26:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:27:52.446 18:26:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:52.446 18:26:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:27:52.706 BaseBdev3_malloc 00:27:52.706 18:26:23 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:52.706 18:26:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:27:52.706 18:26:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:52.706 18:26:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:27:52.706 true 00:27:52.706 18:26:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:52.706 18:26:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:27:52.706 18:26:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:52.706 18:26:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:27:52.706 [2024-12-06 18:26:23.468081] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:27:52.706 [2024-12-06 18:26:23.468367] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:52.706 [2024-12-06 18:26:23.468407] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:27:52.706 [2024-12-06 18:26:23.468424] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:52.706 [2024-12-06 18:26:23.471394] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:52.706 [2024-12-06 18:26:23.471581] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:27:52.706 BaseBdev3 00:27:52.706 18:26:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:52.706 18:26:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:27:52.706 18:26:23 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:27:52.706 18:26:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:27:52.706 [2024-12-06 18:26:23.480378] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:27:52.706 [2024-12-06 18:26:23.482692] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:27:52.706 [2024-12-06 18:26:23.482770] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:27:52.706 [2024-12-06 18:26:23.482995] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:27:52.706 [2024-12-06 18:26:23.483011] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:27:52.706 [2024-12-06 18:26:23.483350] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:27:52.706 [2024-12-06 18:26:23.483547] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:27:52.706 [2024-12-06 18:26:23.483565] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:27:52.706 [2024-12-06 18:26:23.483754] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:52.706 18:26:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:52.706 18:26:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:27:52.706 18:26:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:27:52.706 18:26:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:27:52.706 18:26:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:27:52.706 18:26:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:27:52.706 18:26:23 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:27:52.706 18:26:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:52.706 18:26:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:52.706 18:26:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:52.706 18:26:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:52.706 18:26:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:52.706 18:26:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:52.706 18:26:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:52.706 18:26:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:27:52.706 18:26:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:52.706 18:26:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:52.706 "name": "raid_bdev1", 00:27:52.706 "uuid": "9cc4f646-62fa-43e7-b9ef-db14f5ab7c75", 00:27:52.706 "strip_size_kb": 64, 00:27:52.706 "state": "online", 00:27:52.706 "raid_level": "raid0", 00:27:52.706 "superblock": true, 00:27:52.706 "num_base_bdevs": 3, 00:27:52.706 "num_base_bdevs_discovered": 3, 00:27:52.706 "num_base_bdevs_operational": 3, 00:27:52.706 "base_bdevs_list": [ 00:27:52.706 { 00:27:52.706 "name": "BaseBdev1", 00:27:52.706 "uuid": "7164c6f2-f020-569c-8886-51af73e5cb8c", 00:27:52.706 "is_configured": true, 00:27:52.706 "data_offset": 2048, 00:27:52.706 "data_size": 63488 00:27:52.706 }, 00:27:52.706 { 00:27:52.706 "name": "BaseBdev2", 00:27:52.706 "uuid": "7f68b357-81c2-522a-a5fd-cf54108a4b3e", 00:27:52.706 "is_configured": true, 00:27:52.706 "data_offset": 2048, 00:27:52.706 "data_size": 63488 
00:27:52.706 }, 00:27:52.706 { 00:27:52.706 "name": "BaseBdev3", 00:27:52.706 "uuid": "f8df1b24-306f-5f68-ae41-f9024404b369", 00:27:52.706 "is_configured": true, 00:27:52.706 "data_offset": 2048, 00:27:52.706 "data_size": 63488 00:27:52.706 } 00:27:52.706 ] 00:27:52.706 }' 00:27:52.706 18:26:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:52.706 18:26:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:27:53.274 18:26:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:27:53.274 18:26:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:27:53.274 [2024-12-06 18:26:24.040932] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:27:54.210 18:26:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:27:54.210 18:26:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:54.210 18:26:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:27:54.210 18:26:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:54.210 18:26:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:27:54.210 18:26:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:27:54.210 18:26:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:27:54.210 18:26:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:27:54.210 18:26:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:27:54.210 18:26:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 
00:27:54.210 18:26:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:27:54.210 18:26:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:27:54.210 18:26:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:27:54.210 18:26:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:54.210 18:26:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:54.210 18:26:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:54.210 18:26:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:54.210 18:26:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:54.210 18:26:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:54.210 18:26:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:54.210 18:26:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:27:54.210 18:26:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:54.210 18:26:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:54.210 "name": "raid_bdev1", 00:27:54.210 "uuid": "9cc4f646-62fa-43e7-b9ef-db14f5ab7c75", 00:27:54.210 "strip_size_kb": 64, 00:27:54.210 "state": "online", 00:27:54.210 "raid_level": "raid0", 00:27:54.210 "superblock": true, 00:27:54.210 "num_base_bdevs": 3, 00:27:54.210 "num_base_bdevs_discovered": 3, 00:27:54.210 "num_base_bdevs_operational": 3, 00:27:54.210 "base_bdevs_list": [ 00:27:54.210 { 00:27:54.210 "name": "BaseBdev1", 00:27:54.210 "uuid": "7164c6f2-f020-569c-8886-51af73e5cb8c", 00:27:54.210 "is_configured": true, 00:27:54.210 "data_offset": 2048, 00:27:54.210 "data_size": 63488 
00:27:54.210 }, 00:27:54.210 { 00:27:54.210 "name": "BaseBdev2", 00:27:54.210 "uuid": "7f68b357-81c2-522a-a5fd-cf54108a4b3e", 00:27:54.210 "is_configured": true, 00:27:54.210 "data_offset": 2048, 00:27:54.210 "data_size": 63488 00:27:54.210 }, 00:27:54.210 { 00:27:54.210 "name": "BaseBdev3", 00:27:54.210 "uuid": "f8df1b24-306f-5f68-ae41-f9024404b369", 00:27:54.210 "is_configured": true, 00:27:54.210 "data_offset": 2048, 00:27:54.210 "data_size": 63488 00:27:54.210 } 00:27:54.210 ] 00:27:54.210 }' 00:27:54.210 18:26:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:54.210 18:26:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:27:54.469 18:26:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:27:54.469 18:26:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:54.469 18:26:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:27:54.469 [2024-12-06 18:26:25.406045] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:27:54.469 [2024-12-06 18:26:25.406077] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:27:54.469 [2024-12-06 18:26:25.408907] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:27:54.469 [2024-12-06 18:26:25.409064] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:54.469 [2024-12-06 18:26:25.409141] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:27:54.469 [2024-12-06 18:26:25.409267] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:27:54.469 { 00:27:54.469 "results": [ 00:27:54.469 { 00:27:54.469 "job": "raid_bdev1", 00:27:54.469 "core_mask": "0x1", 00:27:54.469 "workload": "randrw", 00:27:54.469 "percentage": 50, 
00:27:54.469 "status": "finished", 00:27:54.469 "queue_depth": 1, 00:27:54.469 "io_size": 131072, 00:27:54.469 "runtime": 1.364794, 00:27:54.469 "iops": 14833.740476584744, 00:27:54.469 "mibps": 1854.217559573093, 00:27:54.469 "io_failed": 1, 00:27:54.469 "io_timeout": 0, 00:27:54.469 "avg_latency_us": 93.43221222338728, 00:27:54.469 "min_latency_us": 21.384738955823295, 00:27:54.469 "max_latency_us": 3158.3614457831327 00:27:54.469 } 00:27:54.469 ], 00:27:54.469 "core_count": 1 00:27:54.469 } 00:27:54.469 18:26:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:54.469 18:26:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 65063 00:27:54.469 18:26:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 65063 ']' 00:27:54.469 18:26:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 65063 00:27:54.727 18:26:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:27:54.727 18:26:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:54.727 18:26:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65063 00:27:54.727 18:26:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:27:54.727 18:26:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:27:54.727 18:26:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65063' 00:27:54.727 killing process with pid 65063 00:27:54.727 18:26:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 65063 00:27:54.727 [2024-12-06 18:26:25.463052] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:27:54.727 18:26:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 65063 00:27:54.986 [2024-12-06 
18:26:25.696309] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:27:56.366 18:26:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.7GYQpZ5Xnx 00:27:56.366 18:26:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:27:56.366 18:26:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:27:56.366 18:26:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.73 00:27:56.366 18:26:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:27:56.366 18:26:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:27:56.366 18:26:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:27:56.366 18:26:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.73 != \0\.\0\0 ]] 00:27:56.366 00:27:56.366 real 0m4.691s 00:27:56.366 user 0m5.568s 00:27:56.366 sys 0m0.656s 00:27:56.366 18:26:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:56.366 ************************************ 00:27:56.366 END TEST raid_read_error_test 00:27:56.366 ************************************ 00:27:56.366 18:26:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:27:56.366 18:26:26 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid0 3 write 00:27:56.366 18:26:26 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:27:56.366 18:26:26 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:56.366 18:26:26 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:27:56.366 ************************************ 00:27:56.366 START TEST raid_write_error_test 00:27:56.366 ************************************ 00:27:56.366 18:26:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 3 write 00:27:56.366 18:26:26 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:27:56.366 18:26:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:27:56.366 18:26:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:27:56.366 18:26:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:27:56.366 18:26:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:27:56.366 18:26:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:27:56.366 18:26:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:27:56.367 18:26:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:27:56.367 18:26:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:27:56.367 18:26:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:27:56.367 18:26:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:27:56.367 18:26:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:27:56.367 18:26:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:27:56.367 18:26:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:27:56.367 18:26:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:27:56.367 18:26:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:27:56.367 18:26:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:27:56.367 18:26:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:27:56.367 18:26:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:27:56.367 18:26:27 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:27:56.367 18:26:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:27:56.367 18:26:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:27:56.367 18:26:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:27:56.367 18:26:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:27:56.367 18:26:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:27:56.367 18:26:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.yH53ig5vyr 00:27:56.367 18:26:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=65203 00:27:56.367 18:26:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:27:56.367 18:26:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 65203 00:27:56.367 18:26:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 65203 ']' 00:27:56.367 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:56.367 18:26:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:56.367 18:26:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:56.367 18:26:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:27:56.367 18:26:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:56.367 18:26:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:27:56.367 [2024-12-06 18:26:27.134357] Starting SPDK v25.01-pre git sha1 b6a18b192 / DPDK 24.03.0 initialization... 00:27:56.367 [2024-12-06 18:26:27.134726] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65203 ] 00:27:56.627 [2024-12-06 18:26:27.325351] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:56.627 [2024-12-06 18:26:27.449179] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:56.886 [2024-12-06 18:26:27.658094] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:27:56.886 [2024-12-06 18:26:27.658394] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:27:57.146 18:26:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:57.146 18:26:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:27:57.146 18:26:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:27:57.146 18:26:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:27:57.146 18:26:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:57.146 18:26:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:27:57.146 BaseBdev1_malloc 00:27:57.146 18:26:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:57.146 18:26:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create 
BaseBdev1_malloc 00:27:57.146 18:26:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:57.146 18:26:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:27:57.146 true 00:27:57.146 18:26:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:57.146 18:26:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:27:57.146 18:26:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:57.146 18:26:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:27:57.146 [2024-12-06 18:26:28.053360] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:27:57.146 [2024-12-06 18:26:28.053547] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:57.146 [2024-12-06 18:26:28.053580] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:27:57.146 [2024-12-06 18:26:28.053595] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:57.146 [2024-12-06 18:26:28.056105] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:57.146 [2024-12-06 18:26:28.056167] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:27:57.146 BaseBdev1 00:27:57.146 18:26:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:57.146 18:26:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:27:57.146 18:26:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:27:57.146 18:26:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:57.146 18:26:28 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:27:57.406 BaseBdev2_malloc 00:27:57.406 18:26:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:57.406 18:26:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:27:57.406 18:26:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:57.406 18:26:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:27:57.406 true 00:27:57.406 18:26:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:57.406 18:26:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:27:57.406 18:26:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:57.406 18:26:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:27:57.406 [2024-12-06 18:26:28.120913] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:27:57.406 [2024-12-06 18:26:28.121113] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:57.406 [2024-12-06 18:26:28.121142] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:27:57.406 [2024-12-06 18:26:28.121189] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:57.406 [2024-12-06 18:26:28.123816] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:57.406 [2024-12-06 18:26:28.123864] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:27:57.406 BaseBdev2 00:27:57.406 18:26:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:57.406 18:26:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:27:57.406 18:26:28 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:27:57.406 18:26:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:57.406 18:26:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:27:57.406 BaseBdev3_malloc 00:27:57.406 18:26:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:57.406 18:26:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:27:57.406 18:26:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:57.406 18:26:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:27:57.406 true 00:27:57.406 18:26:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:57.406 18:26:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:27:57.406 18:26:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:57.406 18:26:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:27:57.406 [2024-12-06 18:26:28.199996] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:27:57.406 [2024-12-06 18:26:28.200181] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:27:57.406 [2024-12-06 18:26:28.200212] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:27:57.406 [2024-12-06 18:26:28.200227] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:27:57.406 [2024-12-06 18:26:28.202680] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:27:57.406 [2024-12-06 18:26:28.202729] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
BaseBdev3 00:27:57.406 BaseBdev3 00:27:57.407 18:26:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:57.407 18:26:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:27:57.407 18:26:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:57.407 18:26:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:27:57.407 [2024-12-06 18:26:28.212065] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:27:57.407 [2024-12-06 18:26:28.214209] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:27:57.407 [2024-12-06 18:26:28.214289] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:27:57.407 [2024-12-06 18:26:28.214485] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:27:57.407 [2024-12-06 18:26:28.214501] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:27:57.407 [2024-12-06 18:26:28.214807] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:27:57.407 [2024-12-06 18:26:28.214957] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:27:57.407 [2024-12-06 18:26:28.214973] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:27:57.407 [2024-12-06 18:26:28.215123] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:57.407 18:26:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:57.407 18:26:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:27:57.407 18:26:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=raid_bdev1 00:27:57.407 18:26:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:27:57.407 18:26:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:27:57.407 18:26:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:27:57.407 18:26:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:27:57.407 18:26:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:57.407 18:26:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:57.407 18:26:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:57.407 18:26:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:57.407 18:26:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:57.407 18:26:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:57.407 18:26:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:57.407 18:26:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:27:57.407 18:26:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:57.407 18:26:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:57.407 "name": "raid_bdev1", 00:27:57.407 "uuid": "eb77cc5e-e201-4e41-8bab-ce7b28a055aa", 00:27:57.407 "strip_size_kb": 64, 00:27:57.407 "state": "online", 00:27:57.407 "raid_level": "raid0", 00:27:57.407 "superblock": true, 00:27:57.407 "num_base_bdevs": 3, 00:27:57.407 "num_base_bdevs_discovered": 3, 00:27:57.407 "num_base_bdevs_operational": 3, 00:27:57.407 "base_bdevs_list": [ 00:27:57.407 { 00:27:57.407 "name": "BaseBdev1", 
00:27:57.407 "uuid": "88b1c093-5f7b-5829-bc9c-233cfcbb802f", 00:27:57.407 "is_configured": true, 00:27:57.407 "data_offset": 2048, 00:27:57.407 "data_size": 63488 00:27:57.407 }, 00:27:57.407 { 00:27:57.407 "name": "BaseBdev2", 00:27:57.407 "uuid": "c4903328-4594-589d-9862-a2f12200f872", 00:27:57.407 "is_configured": true, 00:27:57.407 "data_offset": 2048, 00:27:57.407 "data_size": 63488 00:27:57.407 }, 00:27:57.407 { 00:27:57.407 "name": "BaseBdev3", 00:27:57.407 "uuid": "aa018e55-63ef-5366-a94e-b35a544e238a", 00:27:57.407 "is_configured": true, 00:27:57.407 "data_offset": 2048, 00:27:57.407 "data_size": 63488 00:27:57.407 } 00:27:57.407 ] 00:27:57.407 }' 00:27:57.407 18:26:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:57.407 18:26:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:27:57.975 18:26:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:27:57.975 18:26:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:27:57.975 [2024-12-06 18:26:28.768628] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:27:58.913 18:26:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:27:58.913 18:26:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:58.913 18:26:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:27:58.913 18:26:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:58.913 18:26:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:27:58.913 18:26:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:27:58.913 18:26:29 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:27:58.913 18:26:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:27:58.913 18:26:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:27:58.913 18:26:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:27:58.913 18:26:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:27:58.913 18:26:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:27:58.913 18:26:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:27:58.913 18:26:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:27:58.913 18:26:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:27:58.913 18:26:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:27:58.913 18:26:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:27:58.913 18:26:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:27:58.913 18:26:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:27:58.914 18:26:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:58.914 18:26:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:27:58.914 18:26:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:58.914 18:26:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:27:58.914 "name": "raid_bdev1", 00:27:58.914 "uuid": "eb77cc5e-e201-4e41-8bab-ce7b28a055aa", 00:27:58.914 "strip_size_kb": 64, 00:27:58.914 "state": "online", 00:27:58.914 
"raid_level": "raid0", 00:27:58.914 "superblock": true, 00:27:58.914 "num_base_bdevs": 3, 00:27:58.914 "num_base_bdevs_discovered": 3, 00:27:58.914 "num_base_bdevs_operational": 3, 00:27:58.914 "base_bdevs_list": [ 00:27:58.914 { 00:27:58.914 "name": "BaseBdev1", 00:27:58.914 "uuid": "88b1c093-5f7b-5829-bc9c-233cfcbb802f", 00:27:58.914 "is_configured": true, 00:27:58.914 "data_offset": 2048, 00:27:58.914 "data_size": 63488 00:27:58.914 }, 00:27:58.914 { 00:27:58.914 "name": "BaseBdev2", 00:27:58.914 "uuid": "c4903328-4594-589d-9862-a2f12200f872", 00:27:58.914 "is_configured": true, 00:27:58.914 "data_offset": 2048, 00:27:58.914 "data_size": 63488 00:27:58.914 }, 00:27:58.914 { 00:27:58.914 "name": "BaseBdev3", 00:27:58.914 "uuid": "aa018e55-63ef-5366-a94e-b35a544e238a", 00:27:58.914 "is_configured": true, 00:27:58.914 "data_offset": 2048, 00:27:58.914 "data_size": 63488 00:27:58.914 } 00:27:58.914 ] 00:27:58.914 }' 00:27:58.914 18:26:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:27:58.914 18:26:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:27:59.173 18:26:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:27:59.173 18:26:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:59.173 18:26:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:27:59.173 [2024-12-06 18:26:30.109339] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:27:59.173 [2024-12-06 18:26:30.109370] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:27:59.173 [2024-12-06 18:26:30.112100] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:27:59.173 [2024-12-06 18:26:30.112330] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:27:59.173 [2024-12-06 18:26:30.112393] bdev_raid.c: 
469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:27:59.173 [2024-12-06 18:26:30.112407] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:27:59.173 { 00:27:59.173 "results": [ 00:27:59.173 { 00:27:59.173 "job": "raid_bdev1", 00:27:59.173 "core_mask": "0x1", 00:27:59.173 "workload": "randrw", 00:27:59.173 "percentage": 50, 00:27:59.173 "status": "finished", 00:27:59.173 "queue_depth": 1, 00:27:59.173 "io_size": 131072, 00:27:59.173 "runtime": 1.340669, 00:27:59.173 "iops": 15586.994254361069, 00:27:59.173 "mibps": 1948.3742817951336, 00:27:59.173 "io_failed": 1, 00:27:59.173 "io_timeout": 0, 00:27:59.173 "avg_latency_us": 88.38233761921069, 00:27:59.173 "min_latency_us": 27.347791164658634, 00:27:59.173 "max_latency_us": 1651.5598393574296 00:27:59.173 } 00:27:59.173 ], 00:27:59.173 "core_count": 1 00:27:59.173 } 00:27:59.173 18:26:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:59.173 18:26:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 65203 00:27:59.173 18:26:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 65203 ']' 00:27:59.173 18:26:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 65203 00:27:59.173 18:26:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:27:59.432 18:26:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:59.432 18:26:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65203 00:27:59.432 18:26:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:27:59.432 killing process with pid 65203 00:27:59.432 18:26:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:27:59.432 
18:26:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65203' 00:27:59.432 18:26:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 65203 00:27:59.432 [2024-12-06 18:26:30.166731] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:27:59.432 18:26:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 65203 00:27:59.691 [2024-12-06 18:26:30.400834] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:28:01.071 18:26:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.yH53ig5vyr 00:28:01.071 18:26:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:28:01.071 18:26:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:28:01.071 18:26:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.75 00:28:01.071 18:26:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:28:01.071 ************************************ 00:28:01.071 END TEST raid_write_error_test 00:28:01.071 ************************************ 00:28:01.071 18:26:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:28:01.071 18:26:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:28:01.071 18:26:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.75 != \0\.\0\0 ]] 00:28:01.071 00:28:01.071 real 0m4.633s 00:28:01.071 user 0m5.429s 00:28:01.071 sys 0m0.682s 00:28:01.071 18:26:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:01.071 18:26:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:28:01.071 18:26:31 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:28:01.071 18:26:31 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test 
raid_state_function_test concat 3 false 00:28:01.071 18:26:31 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:28:01.071 18:26:31 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:01.071 18:26:31 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:28:01.071 ************************************ 00:28:01.071 START TEST raid_state_function_test 00:28:01.071 ************************************ 00:28:01.071 18:26:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 3 false 00:28:01.071 18:26:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:28:01.071 18:26:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:28:01.071 18:26:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:28:01.071 18:26:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:28:01.071 18:26:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:28:01.071 18:26:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:28:01.071 18:26:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:28:01.071 18:26:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:28:01.071 18:26:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:28:01.071 18:26:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:28:01.071 18:26:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:28:01.071 18:26:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:28:01.071 18:26:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:28:01.071 18:26:31 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:28:01.071 18:26:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:28:01.071 18:26:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:28:01.071 18:26:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:28:01.071 18:26:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:28:01.071 18:26:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:28:01.071 18:26:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:28:01.071 18:26:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:28:01.071 18:26:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:28:01.071 18:26:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:28:01.071 18:26:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:28:01.071 18:26:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:28:01.071 18:26:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:28:01.071 18:26:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=65352 00:28:01.071 18:26:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:28:01.071 18:26:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 65352' 00:28:01.071 Process raid pid: 65352 00:28:01.071 18:26:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 65352 00:28:01.071 18:26:31 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 65352 ']' 00:28:01.071 18:26:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:01.071 18:26:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:01.071 18:26:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:01.071 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:01.071 18:26:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:01.071 18:26:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:01.071 [2024-12-06 18:26:31.817805] Starting SPDK v25.01-pre git sha1 b6a18b192 / DPDK 24.03.0 initialization... 00:28:01.071 [2024-12-06 18:26:31.818123] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:01.071 [2024-12-06 18:26:31.994456] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:01.330 [2024-12-06 18:26:32.111494] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:01.587 [2024-12-06 18:26:32.328206] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:28:01.587 [2024-12-06 18:26:32.328460] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:28:01.846 18:26:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:01.846 18:26:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:28:01.846 18:26:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd 
bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:28:01.846 18:26:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:01.846 18:26:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:01.846 [2024-12-06 18:26:32.650996] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:28:01.846 [2024-12-06 18:26:32.651070] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:28:01.846 [2024-12-06 18:26:32.651088] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:28:01.846 [2024-12-06 18:26:32.651108] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:28:01.846 [2024-12-06 18:26:32.651120] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:28:01.846 [2024-12-06 18:26:32.651139] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:28:01.846 18:26:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:01.846 18:26:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:28:01.846 18:26:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:28:01.846 18:26:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:28:01.846 18:26:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:28:01.846 18:26:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:28:01.846 18:26:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:28:01.846 18:26:32 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:28:01.846 18:26:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:28:01.846 18:26:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:28:01.846 18:26:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:28:01.846 18:26:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:01.846 18:26:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:01.846 18:26:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:01.846 18:26:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:28:01.846 18:26:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:01.846 18:26:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:28:01.847 "name": "Existed_Raid", 00:28:01.847 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:01.847 "strip_size_kb": 64, 00:28:01.847 "state": "configuring", 00:28:01.847 "raid_level": "concat", 00:28:01.847 "superblock": false, 00:28:01.847 "num_base_bdevs": 3, 00:28:01.847 "num_base_bdevs_discovered": 0, 00:28:01.847 "num_base_bdevs_operational": 3, 00:28:01.847 "base_bdevs_list": [ 00:28:01.847 { 00:28:01.847 "name": "BaseBdev1", 00:28:01.847 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:01.847 "is_configured": false, 00:28:01.847 "data_offset": 0, 00:28:01.847 "data_size": 0 00:28:01.847 }, 00:28:01.847 { 00:28:01.847 "name": "BaseBdev2", 00:28:01.847 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:01.847 "is_configured": false, 00:28:01.847 "data_offset": 0, 00:28:01.847 "data_size": 0 00:28:01.847 }, 00:28:01.847 { 00:28:01.847 "name": "BaseBdev3", 00:28:01.847 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:28:01.847 "is_configured": false, 00:28:01.847 "data_offset": 0, 00:28:01.847 "data_size": 0 00:28:01.847 } 00:28:01.847 ] 00:28:01.847 }' 00:28:01.847 18:26:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:28:01.847 18:26:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:02.414 18:26:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:28:02.414 18:26:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:02.414 18:26:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:02.414 [2024-12-06 18:26:33.086340] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:28:02.414 [2024-12-06 18:26:33.086535] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:28:02.414 18:26:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:02.414 18:26:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:28:02.414 18:26:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:02.414 18:26:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:02.414 [2024-12-06 18:26:33.098311] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:28:02.414 [2024-12-06 18:26:33.098360] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:28:02.414 [2024-12-06 18:26:33.098371] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:28:02.414 [2024-12-06 18:26:33.098401] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev 
BaseBdev2 doesn't exist now 00:28:02.414 [2024-12-06 18:26:33.098409] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:28:02.414 [2024-12-06 18:26:33.098422] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:28:02.414 18:26:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:02.414 18:26:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:28:02.414 18:26:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:02.414 18:26:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:02.414 [2024-12-06 18:26:33.144718] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:28:02.414 BaseBdev1 00:28:02.414 18:26:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:02.415 18:26:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:28:02.415 18:26:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:28:02.415 18:26:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:28:02.415 18:26:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:28:02.415 18:26:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:28:02.415 18:26:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:28:02.415 18:26:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:28:02.415 18:26:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:02.415 18:26:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set 
+x 00:28:02.415 18:26:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:02.415 18:26:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:28:02.415 18:26:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:02.415 18:26:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:02.415 [ 00:28:02.415 { 00:28:02.415 "name": "BaseBdev1", 00:28:02.415 "aliases": [ 00:28:02.415 "4d3b011e-b503-49ef-b5f8-89c715cb5e3a" 00:28:02.415 ], 00:28:02.415 "product_name": "Malloc disk", 00:28:02.415 "block_size": 512, 00:28:02.415 "num_blocks": 65536, 00:28:02.415 "uuid": "4d3b011e-b503-49ef-b5f8-89c715cb5e3a", 00:28:02.415 "assigned_rate_limits": { 00:28:02.415 "rw_ios_per_sec": 0, 00:28:02.415 "rw_mbytes_per_sec": 0, 00:28:02.415 "r_mbytes_per_sec": 0, 00:28:02.415 "w_mbytes_per_sec": 0 00:28:02.415 }, 00:28:02.415 "claimed": true, 00:28:02.415 "claim_type": "exclusive_write", 00:28:02.415 "zoned": false, 00:28:02.415 "supported_io_types": { 00:28:02.415 "read": true, 00:28:02.415 "write": true, 00:28:02.415 "unmap": true, 00:28:02.415 "flush": true, 00:28:02.415 "reset": true, 00:28:02.415 "nvme_admin": false, 00:28:02.415 "nvme_io": false, 00:28:02.415 "nvme_io_md": false, 00:28:02.415 "write_zeroes": true, 00:28:02.415 "zcopy": true, 00:28:02.415 "get_zone_info": false, 00:28:02.415 "zone_management": false, 00:28:02.415 "zone_append": false, 00:28:02.415 "compare": false, 00:28:02.415 "compare_and_write": false, 00:28:02.415 "abort": true, 00:28:02.415 "seek_hole": false, 00:28:02.415 "seek_data": false, 00:28:02.415 "copy": true, 00:28:02.415 "nvme_iov_md": false 00:28:02.415 }, 00:28:02.415 "memory_domains": [ 00:28:02.415 { 00:28:02.415 "dma_device_id": "system", 00:28:02.415 "dma_device_type": 1 00:28:02.415 }, 00:28:02.415 { 00:28:02.415 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:28:02.415 "dma_device_type": 2 00:28:02.415 } 00:28:02.415 ], 00:28:02.415 "driver_specific": {} 00:28:02.415 } 00:28:02.415 ] 00:28:02.415 18:26:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:02.415 18:26:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:28:02.415 18:26:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:28:02.415 18:26:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:28:02.415 18:26:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:28:02.415 18:26:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:28:02.415 18:26:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:28:02.415 18:26:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:28:02.415 18:26:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:28:02.415 18:26:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:28:02.415 18:26:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:28:02.415 18:26:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:28:02.415 18:26:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:02.415 18:26:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:02.415 18:26:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:02.415 18:26:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:28:02.415 18:26:33 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:02.415 18:26:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:28:02.415 "name": "Existed_Raid", 00:28:02.415 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:02.415 "strip_size_kb": 64, 00:28:02.415 "state": "configuring", 00:28:02.415 "raid_level": "concat", 00:28:02.415 "superblock": false, 00:28:02.415 "num_base_bdevs": 3, 00:28:02.415 "num_base_bdevs_discovered": 1, 00:28:02.415 "num_base_bdevs_operational": 3, 00:28:02.415 "base_bdevs_list": [ 00:28:02.415 { 00:28:02.415 "name": "BaseBdev1", 00:28:02.415 "uuid": "4d3b011e-b503-49ef-b5f8-89c715cb5e3a", 00:28:02.415 "is_configured": true, 00:28:02.415 "data_offset": 0, 00:28:02.415 "data_size": 65536 00:28:02.415 }, 00:28:02.415 { 00:28:02.415 "name": "BaseBdev2", 00:28:02.415 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:02.415 "is_configured": false, 00:28:02.415 "data_offset": 0, 00:28:02.415 "data_size": 0 00:28:02.415 }, 00:28:02.415 { 00:28:02.415 "name": "BaseBdev3", 00:28:02.415 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:02.415 "is_configured": false, 00:28:02.415 "data_offset": 0, 00:28:02.415 "data_size": 0 00:28:02.415 } 00:28:02.415 ] 00:28:02.415 }' 00:28:02.415 18:26:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:28:02.415 18:26:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:02.982 18:26:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:28:02.982 18:26:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:02.982 18:26:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:02.982 [2024-12-06 18:26:33.664142] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:28:02.982 [2024-12-06 18:26:33.664207] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:28:02.982 18:26:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:02.982 18:26:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:28:02.982 18:26:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:02.982 18:26:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:02.982 [2024-12-06 18:26:33.676192] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:28:02.982 [2024-12-06 18:26:33.678397] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:28:02.982 [2024-12-06 18:26:33.678448] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:28:02.982 [2024-12-06 18:26:33.678459] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:28:02.982 [2024-12-06 18:26:33.678472] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:28:02.982 18:26:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:02.982 18:26:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:28:02.982 18:26:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:28:02.982 18:26:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:28:02.982 18:26:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:28:02.982 18:26:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:28:02.982 18:26:33 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:28:02.982 18:26:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:28:02.982 18:26:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:28:02.982 18:26:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:28:02.982 18:26:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:28:02.982 18:26:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:28:02.982 18:26:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:28:02.982 18:26:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:02.982 18:26:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:02.982 18:26:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:28:02.982 18:26:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:02.982 18:26:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:02.982 18:26:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:28:02.982 "name": "Existed_Raid", 00:28:02.982 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:02.982 "strip_size_kb": 64, 00:28:02.982 "state": "configuring", 00:28:02.982 "raid_level": "concat", 00:28:02.982 "superblock": false, 00:28:02.982 "num_base_bdevs": 3, 00:28:02.982 "num_base_bdevs_discovered": 1, 00:28:02.982 "num_base_bdevs_operational": 3, 00:28:02.982 "base_bdevs_list": [ 00:28:02.982 { 00:28:02.982 "name": "BaseBdev1", 00:28:02.982 "uuid": "4d3b011e-b503-49ef-b5f8-89c715cb5e3a", 00:28:02.982 "is_configured": true, 00:28:02.982 "data_offset": 
0, 00:28:02.982 "data_size": 65536 00:28:02.982 }, 00:28:02.982 { 00:28:02.982 "name": "BaseBdev2", 00:28:02.982 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:02.982 "is_configured": false, 00:28:02.982 "data_offset": 0, 00:28:02.982 "data_size": 0 00:28:02.982 }, 00:28:02.982 { 00:28:02.982 "name": "BaseBdev3", 00:28:02.982 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:02.982 "is_configured": false, 00:28:02.982 "data_offset": 0, 00:28:02.982 "data_size": 0 00:28:02.982 } 00:28:02.982 ] 00:28:02.982 }' 00:28:02.982 18:26:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:28:02.982 18:26:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:03.255 18:26:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:28:03.255 18:26:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:03.255 18:26:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:03.515 [2024-12-06 18:26:34.243583] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:28:03.515 BaseBdev2 00:28:03.515 18:26:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:03.515 18:26:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:28:03.515 18:26:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:28:03.515 18:26:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:28:03.515 18:26:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:28:03.515 18:26:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:28:03.515 18:26:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 
00:28:03.515 18:26:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:28:03.515 18:26:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:03.515 18:26:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:03.515 18:26:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:03.515 18:26:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:28:03.515 18:26:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:03.515 18:26:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:03.515 [ 00:28:03.515 { 00:28:03.515 "name": "BaseBdev2", 00:28:03.515 "aliases": [ 00:28:03.515 "a0991053-77af-48e2-b370-cea68c119d23" 00:28:03.515 ], 00:28:03.515 "product_name": "Malloc disk", 00:28:03.515 "block_size": 512, 00:28:03.515 "num_blocks": 65536, 00:28:03.515 "uuid": "a0991053-77af-48e2-b370-cea68c119d23", 00:28:03.515 "assigned_rate_limits": { 00:28:03.515 "rw_ios_per_sec": 0, 00:28:03.515 "rw_mbytes_per_sec": 0, 00:28:03.515 "r_mbytes_per_sec": 0, 00:28:03.515 "w_mbytes_per_sec": 0 00:28:03.515 }, 00:28:03.515 "claimed": true, 00:28:03.515 "claim_type": "exclusive_write", 00:28:03.515 "zoned": false, 00:28:03.515 "supported_io_types": { 00:28:03.515 "read": true, 00:28:03.515 "write": true, 00:28:03.515 "unmap": true, 00:28:03.515 "flush": true, 00:28:03.515 "reset": true, 00:28:03.515 "nvme_admin": false, 00:28:03.515 "nvme_io": false, 00:28:03.515 "nvme_io_md": false, 00:28:03.515 "write_zeroes": true, 00:28:03.515 "zcopy": true, 00:28:03.515 "get_zone_info": false, 00:28:03.515 "zone_management": false, 00:28:03.515 "zone_append": false, 00:28:03.515 "compare": false, 00:28:03.515 "compare_and_write": false, 00:28:03.515 "abort": true, 00:28:03.515 "seek_hole": 
false, 00:28:03.515 "seek_data": false, 00:28:03.515 "copy": true, 00:28:03.515 "nvme_iov_md": false 00:28:03.515 }, 00:28:03.515 "memory_domains": [ 00:28:03.515 { 00:28:03.515 "dma_device_id": "system", 00:28:03.515 "dma_device_type": 1 00:28:03.515 }, 00:28:03.515 { 00:28:03.515 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:03.515 "dma_device_type": 2 00:28:03.515 } 00:28:03.515 ], 00:28:03.515 "driver_specific": {} 00:28:03.515 } 00:28:03.515 ] 00:28:03.515 18:26:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:03.515 18:26:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:28:03.515 18:26:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:28:03.515 18:26:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:28:03.515 18:26:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:28:03.515 18:26:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:28:03.515 18:26:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:28:03.515 18:26:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:28:03.515 18:26:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:28:03.515 18:26:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:28:03.515 18:26:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:28:03.515 18:26:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:28:03.515 18:26:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:28:03.515 18:26:34 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:28:03.515 18:26:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:03.515 18:26:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:03.515 18:26:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:28:03.515 18:26:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:03.515 18:26:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:03.515 18:26:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:28:03.515 "name": "Existed_Raid", 00:28:03.515 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:03.515 "strip_size_kb": 64, 00:28:03.515 "state": "configuring", 00:28:03.515 "raid_level": "concat", 00:28:03.515 "superblock": false, 00:28:03.515 "num_base_bdevs": 3, 00:28:03.515 "num_base_bdevs_discovered": 2, 00:28:03.515 "num_base_bdevs_operational": 3, 00:28:03.515 "base_bdevs_list": [ 00:28:03.515 { 00:28:03.515 "name": "BaseBdev1", 00:28:03.515 "uuid": "4d3b011e-b503-49ef-b5f8-89c715cb5e3a", 00:28:03.515 "is_configured": true, 00:28:03.515 "data_offset": 0, 00:28:03.515 "data_size": 65536 00:28:03.515 }, 00:28:03.515 { 00:28:03.515 "name": "BaseBdev2", 00:28:03.515 "uuid": "a0991053-77af-48e2-b370-cea68c119d23", 00:28:03.515 "is_configured": true, 00:28:03.515 "data_offset": 0, 00:28:03.515 "data_size": 65536 00:28:03.515 }, 00:28:03.515 { 00:28:03.515 "name": "BaseBdev3", 00:28:03.515 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:03.515 "is_configured": false, 00:28:03.515 "data_offset": 0, 00:28:03.515 "data_size": 0 00:28:03.515 } 00:28:03.515 ] 00:28:03.515 }' 00:28:03.515 18:26:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:28:03.515 18:26:34 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:28:03.774 18:26:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:28:03.774 18:26:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:03.774 18:26:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:04.033 [2024-12-06 18:26:34.758763] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:28:04.033 [2024-12-06 18:26:34.758809] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:28:04.033 [2024-12-06 18:26:34.758824] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:28:04.033 [2024-12-06 18:26:34.759107] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:28:04.033 [2024-12-06 18:26:34.759321] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:28:04.033 [2024-12-06 18:26:34.759334] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:28:04.033 BaseBdev3 00:28:04.033 [2024-12-06 18:26:34.759602] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:04.033 18:26:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:04.033 18:26:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:28:04.033 18:26:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:28:04.033 18:26:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:28:04.033 18:26:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:28:04.033 18:26:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:28:04.033 18:26:34 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:28:04.033 18:26:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:28:04.033 18:26:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:04.033 18:26:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:04.033 18:26:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:04.033 18:26:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:28:04.033 18:26:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:04.033 18:26:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:04.033 [ 00:28:04.033 { 00:28:04.033 "name": "BaseBdev3", 00:28:04.033 "aliases": [ 00:28:04.033 "b4db2fb2-db7b-43db-b673-a2ef345c6c39" 00:28:04.033 ], 00:28:04.033 "product_name": "Malloc disk", 00:28:04.033 "block_size": 512, 00:28:04.033 "num_blocks": 65536, 00:28:04.033 "uuid": "b4db2fb2-db7b-43db-b673-a2ef345c6c39", 00:28:04.033 "assigned_rate_limits": { 00:28:04.033 "rw_ios_per_sec": 0, 00:28:04.033 "rw_mbytes_per_sec": 0, 00:28:04.033 "r_mbytes_per_sec": 0, 00:28:04.033 "w_mbytes_per_sec": 0 00:28:04.033 }, 00:28:04.033 "claimed": true, 00:28:04.033 "claim_type": "exclusive_write", 00:28:04.033 "zoned": false, 00:28:04.033 "supported_io_types": { 00:28:04.033 "read": true, 00:28:04.033 "write": true, 00:28:04.033 "unmap": true, 00:28:04.033 "flush": true, 00:28:04.033 "reset": true, 00:28:04.033 "nvme_admin": false, 00:28:04.033 "nvme_io": false, 00:28:04.033 "nvme_io_md": false, 00:28:04.033 "write_zeroes": true, 00:28:04.033 "zcopy": true, 00:28:04.033 "get_zone_info": false, 00:28:04.033 "zone_management": false, 00:28:04.033 "zone_append": false, 00:28:04.033 "compare": false, 
00:28:04.033 "compare_and_write": false, 00:28:04.033 "abort": true, 00:28:04.033 "seek_hole": false, 00:28:04.033 "seek_data": false, 00:28:04.033 "copy": true, 00:28:04.033 "nvme_iov_md": false 00:28:04.033 }, 00:28:04.033 "memory_domains": [ 00:28:04.033 { 00:28:04.033 "dma_device_id": "system", 00:28:04.033 "dma_device_type": 1 00:28:04.033 }, 00:28:04.033 { 00:28:04.033 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:04.033 "dma_device_type": 2 00:28:04.033 } 00:28:04.033 ], 00:28:04.033 "driver_specific": {} 00:28:04.033 } 00:28:04.033 ] 00:28:04.033 18:26:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:04.033 18:26:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:28:04.033 18:26:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:28:04.033 18:26:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:28:04.033 18:26:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:28:04.033 18:26:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:28:04.033 18:26:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:28:04.033 18:26:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:28:04.033 18:26:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:28:04.033 18:26:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:28:04.033 18:26:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:28:04.033 18:26:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:28:04.033 18:26:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:28:04.033 18:26:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:28:04.033 18:26:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:04.033 18:26:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:28:04.033 18:26:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:04.033 18:26:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:04.033 18:26:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:04.033 18:26:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:28:04.033 "name": "Existed_Raid", 00:28:04.033 "uuid": "2b4b52aa-9aab-4830-9bef-9ef938387f89", 00:28:04.033 "strip_size_kb": 64, 00:28:04.033 "state": "online", 00:28:04.033 "raid_level": "concat", 00:28:04.033 "superblock": false, 00:28:04.033 "num_base_bdevs": 3, 00:28:04.033 "num_base_bdevs_discovered": 3, 00:28:04.033 "num_base_bdevs_operational": 3, 00:28:04.033 "base_bdevs_list": [ 00:28:04.033 { 00:28:04.033 "name": "BaseBdev1", 00:28:04.033 "uuid": "4d3b011e-b503-49ef-b5f8-89c715cb5e3a", 00:28:04.033 "is_configured": true, 00:28:04.033 "data_offset": 0, 00:28:04.033 "data_size": 65536 00:28:04.033 }, 00:28:04.033 { 00:28:04.033 "name": "BaseBdev2", 00:28:04.033 "uuid": "a0991053-77af-48e2-b370-cea68c119d23", 00:28:04.033 "is_configured": true, 00:28:04.033 "data_offset": 0, 00:28:04.033 "data_size": 65536 00:28:04.033 }, 00:28:04.033 { 00:28:04.033 "name": "BaseBdev3", 00:28:04.033 "uuid": "b4db2fb2-db7b-43db-b673-a2ef345c6c39", 00:28:04.033 "is_configured": true, 00:28:04.033 "data_offset": 0, 00:28:04.033 "data_size": 65536 00:28:04.033 } 00:28:04.033 ] 00:28:04.033 }' 00:28:04.034 18:26:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:28:04.034 18:26:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:04.291 18:26:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:28:04.291 18:26:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:28:04.291 18:26:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:28:04.291 18:26:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:28:04.291 18:26:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:28:04.291 18:26:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:28:04.291 18:26:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:28:04.291 18:26:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:04.291 18:26:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:04.291 18:26:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:28:04.291 [2024-12-06 18:26:35.226555] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:28:04.549 18:26:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:04.549 18:26:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:28:04.549 "name": "Existed_Raid", 00:28:04.549 "aliases": [ 00:28:04.549 "2b4b52aa-9aab-4830-9bef-9ef938387f89" 00:28:04.549 ], 00:28:04.549 "product_name": "Raid Volume", 00:28:04.549 "block_size": 512, 00:28:04.549 "num_blocks": 196608, 00:28:04.549 "uuid": "2b4b52aa-9aab-4830-9bef-9ef938387f89", 00:28:04.549 "assigned_rate_limits": { 00:28:04.549 "rw_ios_per_sec": 0, 00:28:04.549 "rw_mbytes_per_sec": 0, 00:28:04.549 "r_mbytes_per_sec": 
0, 00:28:04.549 "w_mbytes_per_sec": 0 00:28:04.549 }, 00:28:04.549 "claimed": false, 00:28:04.549 "zoned": false, 00:28:04.549 "supported_io_types": { 00:28:04.549 "read": true, 00:28:04.549 "write": true, 00:28:04.549 "unmap": true, 00:28:04.550 "flush": true, 00:28:04.550 "reset": true, 00:28:04.550 "nvme_admin": false, 00:28:04.550 "nvme_io": false, 00:28:04.550 "nvme_io_md": false, 00:28:04.550 "write_zeroes": true, 00:28:04.550 "zcopy": false, 00:28:04.550 "get_zone_info": false, 00:28:04.550 "zone_management": false, 00:28:04.550 "zone_append": false, 00:28:04.550 "compare": false, 00:28:04.550 "compare_and_write": false, 00:28:04.550 "abort": false, 00:28:04.550 "seek_hole": false, 00:28:04.550 "seek_data": false, 00:28:04.550 "copy": false, 00:28:04.550 "nvme_iov_md": false 00:28:04.550 }, 00:28:04.550 "memory_domains": [ 00:28:04.550 { 00:28:04.550 "dma_device_id": "system", 00:28:04.550 "dma_device_type": 1 00:28:04.550 }, 00:28:04.550 { 00:28:04.550 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:04.550 "dma_device_type": 2 00:28:04.550 }, 00:28:04.550 { 00:28:04.550 "dma_device_id": "system", 00:28:04.550 "dma_device_type": 1 00:28:04.550 }, 00:28:04.550 { 00:28:04.550 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:04.550 "dma_device_type": 2 00:28:04.550 }, 00:28:04.550 { 00:28:04.550 "dma_device_id": "system", 00:28:04.550 "dma_device_type": 1 00:28:04.550 }, 00:28:04.550 { 00:28:04.550 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:04.550 "dma_device_type": 2 00:28:04.550 } 00:28:04.550 ], 00:28:04.550 "driver_specific": { 00:28:04.550 "raid": { 00:28:04.550 "uuid": "2b4b52aa-9aab-4830-9bef-9ef938387f89", 00:28:04.550 "strip_size_kb": 64, 00:28:04.550 "state": "online", 00:28:04.550 "raid_level": "concat", 00:28:04.550 "superblock": false, 00:28:04.550 "num_base_bdevs": 3, 00:28:04.550 "num_base_bdevs_discovered": 3, 00:28:04.550 "num_base_bdevs_operational": 3, 00:28:04.550 "base_bdevs_list": [ 00:28:04.550 { 00:28:04.550 "name": "BaseBdev1", 
00:28:04.550 "uuid": "4d3b011e-b503-49ef-b5f8-89c715cb5e3a", 00:28:04.550 "is_configured": true, 00:28:04.550 "data_offset": 0, 00:28:04.550 "data_size": 65536 00:28:04.550 }, 00:28:04.550 { 00:28:04.550 "name": "BaseBdev2", 00:28:04.550 "uuid": "a0991053-77af-48e2-b370-cea68c119d23", 00:28:04.550 "is_configured": true, 00:28:04.550 "data_offset": 0, 00:28:04.550 "data_size": 65536 00:28:04.550 }, 00:28:04.550 { 00:28:04.550 "name": "BaseBdev3", 00:28:04.550 "uuid": "b4db2fb2-db7b-43db-b673-a2ef345c6c39", 00:28:04.550 "is_configured": true, 00:28:04.550 "data_offset": 0, 00:28:04.550 "data_size": 65536 00:28:04.550 } 00:28:04.550 ] 00:28:04.550 } 00:28:04.550 } 00:28:04.550 }' 00:28:04.550 18:26:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:28:04.550 18:26:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:28:04.550 BaseBdev2 00:28:04.550 BaseBdev3' 00:28:04.550 18:26:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:28:04.550 18:26:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:28:04.550 18:26:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:28:04.550 18:26:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:28:04.550 18:26:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:04.550 18:26:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:28:04.550 18:26:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:04.550 18:26:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:28:04.550 18:26:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:28:04.550 18:26:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:28:04.550 18:26:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:28:04.550 18:26:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:28:04.550 18:26:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:04.550 18:26:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:04.550 18:26:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:28:04.550 18:26:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:04.550 18:26:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:28:04.550 18:26:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:28:04.550 18:26:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:28:04.550 18:26:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:28:04.550 18:26:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:04.550 18:26:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:04.550 18:26:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:28:04.550 18:26:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:04.550 18:26:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 
00:28:04.550 18:26:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:28:04.550 18:26:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:28:04.550 18:26:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:04.550 18:26:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:04.550 [2024-12-06 18:26:35.493884] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:28:04.550 [2024-12-06 18:26:35.493915] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:28:04.550 [2024-12-06 18:26:35.493969] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:28:04.809 18:26:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:04.809 18:26:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:28:04.809 18:26:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:28:04.809 18:26:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:28:04.809 18:26:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:28:04.809 18:26:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:28:04.809 18:26:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2 00:28:04.809 18:26:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:28:04.809 18:26:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:28:04.809 18:26:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:28:04.809 18:26:35 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@106 -- # local strip_size=64 00:28:04.809 18:26:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:28:04.809 18:26:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:28:04.809 18:26:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:28:04.809 18:26:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:28:04.809 18:26:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:28:04.809 18:26:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:04.809 18:26:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:04.809 18:26:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:04.809 18:26:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:28:04.809 18:26:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:04.809 18:26:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:28:04.809 "name": "Existed_Raid", 00:28:04.809 "uuid": "2b4b52aa-9aab-4830-9bef-9ef938387f89", 00:28:04.809 "strip_size_kb": 64, 00:28:04.809 "state": "offline", 00:28:04.809 "raid_level": "concat", 00:28:04.809 "superblock": false, 00:28:04.809 "num_base_bdevs": 3, 00:28:04.809 "num_base_bdevs_discovered": 2, 00:28:04.809 "num_base_bdevs_operational": 2, 00:28:04.809 "base_bdevs_list": [ 00:28:04.809 { 00:28:04.809 "name": null, 00:28:04.809 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:04.809 "is_configured": false, 00:28:04.809 "data_offset": 0, 00:28:04.809 "data_size": 65536 00:28:04.809 }, 00:28:04.809 { 00:28:04.809 "name": "BaseBdev2", 00:28:04.809 "uuid": 
"a0991053-77af-48e2-b370-cea68c119d23", 00:28:04.809 "is_configured": true, 00:28:04.809 "data_offset": 0, 00:28:04.809 "data_size": 65536 00:28:04.809 }, 00:28:04.809 { 00:28:04.809 "name": "BaseBdev3", 00:28:04.809 "uuid": "b4db2fb2-db7b-43db-b673-a2ef345c6c39", 00:28:04.809 "is_configured": true, 00:28:04.809 "data_offset": 0, 00:28:04.809 "data_size": 65536 00:28:04.809 } 00:28:04.809 ] 00:28:04.809 }' 00:28:04.809 18:26:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:28:04.809 18:26:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:05.382 18:26:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:28:05.382 18:26:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:28:05.382 18:26:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:05.382 18:26:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:05.382 18:26:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:05.382 18:26:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:28:05.382 18:26:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:05.382 18:26:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:28:05.382 18:26:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:28:05.382 18:26:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:28:05.382 18:26:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:05.382 18:26:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:05.382 [2024-12-06 18:26:36.107429] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:28:05.382 18:26:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:05.382 18:26:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:28:05.382 18:26:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:28:05.382 18:26:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:05.382 18:26:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:05.382 18:26:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:28:05.382 18:26:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:05.382 18:26:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:05.382 18:26:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:28:05.382 18:26:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:28:05.382 18:26:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:28:05.382 18:26:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:05.382 18:26:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:05.382 [2024-12-06 18:26:36.255056] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:28:05.382 [2024-12-06 18:26:36.255108] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:28:05.641 18:26:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:05.641 18:26:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:28:05.641 18:26:36 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:28:05.641 18:26:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:05.641 18:26:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:05.641 18:26:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:05.641 18:26:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:28:05.641 18:26:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:05.641 18:26:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:28:05.641 18:26:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:28:05.641 18:26:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:28:05.641 18:26:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:28:05.641 18:26:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:28:05.641 18:26:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:28:05.641 18:26:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:05.641 18:26:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:05.641 BaseBdev2 00:28:05.641 18:26:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:05.641 18:26:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:28:05.641 18:26:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:28:05.641 18:26:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:28:05.641 
18:26:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:28:05.641 18:26:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:28:05.641 18:26:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:28:05.641 18:26:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:28:05.641 18:26:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:05.641 18:26:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:05.641 18:26:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:05.641 18:26:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:28:05.641 18:26:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:05.641 18:26:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:05.641 [ 00:28:05.641 { 00:28:05.641 "name": "BaseBdev2", 00:28:05.641 "aliases": [ 00:28:05.641 "2c701a4b-7adc-441e-8c4a-e66a03dc9010" 00:28:05.641 ], 00:28:05.641 "product_name": "Malloc disk", 00:28:05.641 "block_size": 512, 00:28:05.641 "num_blocks": 65536, 00:28:05.641 "uuid": "2c701a4b-7adc-441e-8c4a-e66a03dc9010", 00:28:05.641 "assigned_rate_limits": { 00:28:05.641 "rw_ios_per_sec": 0, 00:28:05.641 "rw_mbytes_per_sec": 0, 00:28:05.641 "r_mbytes_per_sec": 0, 00:28:05.641 "w_mbytes_per_sec": 0 00:28:05.641 }, 00:28:05.641 "claimed": false, 00:28:05.641 "zoned": false, 00:28:05.641 "supported_io_types": { 00:28:05.641 "read": true, 00:28:05.641 "write": true, 00:28:05.641 "unmap": true, 00:28:05.641 "flush": true, 00:28:05.641 "reset": true, 00:28:05.641 "nvme_admin": false, 00:28:05.641 "nvme_io": false, 00:28:05.641 "nvme_io_md": false, 00:28:05.641 "write_zeroes": true, 
00:28:05.641 "zcopy": true, 00:28:05.641 "get_zone_info": false, 00:28:05.641 "zone_management": false, 00:28:05.641 "zone_append": false, 00:28:05.641 "compare": false, 00:28:05.641 "compare_and_write": false, 00:28:05.641 "abort": true, 00:28:05.641 "seek_hole": false, 00:28:05.641 "seek_data": false, 00:28:05.641 "copy": true, 00:28:05.641 "nvme_iov_md": false 00:28:05.641 }, 00:28:05.641 "memory_domains": [ 00:28:05.641 { 00:28:05.641 "dma_device_id": "system", 00:28:05.641 "dma_device_type": 1 00:28:05.641 }, 00:28:05.641 { 00:28:05.641 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:05.641 "dma_device_type": 2 00:28:05.641 } 00:28:05.641 ], 00:28:05.641 "driver_specific": {} 00:28:05.641 } 00:28:05.641 ] 00:28:05.641 18:26:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:05.641 18:26:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:28:05.641 18:26:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:28:05.641 18:26:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:28:05.641 18:26:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:28:05.641 18:26:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:05.641 18:26:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:05.641 BaseBdev3 00:28:05.641 18:26:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:05.641 18:26:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:28:05.641 18:26:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:28:05.641 18:26:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:28:05.641 18:26:36 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:28:05.641 18:26:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:28:05.641 18:26:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:28:05.641 18:26:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:28:05.641 18:26:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:05.641 18:26:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:05.641 18:26:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:05.641 18:26:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:28:05.641 18:26:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:05.641 18:26:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:05.641 [ 00:28:05.641 { 00:28:05.641 "name": "BaseBdev3", 00:28:05.641 "aliases": [ 00:28:05.641 "1af412bc-766d-46e1-8a5a-a6e5d9a5eb28" 00:28:05.641 ], 00:28:05.641 "product_name": "Malloc disk", 00:28:05.641 "block_size": 512, 00:28:05.641 "num_blocks": 65536, 00:28:05.641 "uuid": "1af412bc-766d-46e1-8a5a-a6e5d9a5eb28", 00:28:05.641 "assigned_rate_limits": { 00:28:05.641 "rw_ios_per_sec": 0, 00:28:05.641 "rw_mbytes_per_sec": 0, 00:28:05.641 "r_mbytes_per_sec": 0, 00:28:05.641 "w_mbytes_per_sec": 0 00:28:05.641 }, 00:28:05.641 "claimed": false, 00:28:05.641 "zoned": false, 00:28:05.641 "supported_io_types": { 00:28:05.641 "read": true, 00:28:05.641 "write": true, 00:28:05.641 "unmap": true, 00:28:05.641 "flush": true, 00:28:05.641 "reset": true, 00:28:05.641 "nvme_admin": false, 00:28:05.641 "nvme_io": false, 00:28:05.641 "nvme_io_md": false, 00:28:05.641 "write_zeroes": true, 
00:28:05.641 "zcopy": true, 00:28:05.641 "get_zone_info": false, 00:28:05.641 "zone_management": false, 00:28:05.641 "zone_append": false, 00:28:05.641 "compare": false, 00:28:05.641 "compare_and_write": false, 00:28:05.641 "abort": true, 00:28:05.641 "seek_hole": false, 00:28:05.641 "seek_data": false, 00:28:05.641 "copy": true, 00:28:05.641 "nvme_iov_md": false 00:28:05.641 }, 00:28:05.641 "memory_domains": [ 00:28:05.641 { 00:28:05.641 "dma_device_id": "system", 00:28:05.641 "dma_device_type": 1 00:28:05.641 }, 00:28:05.641 { 00:28:05.641 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:05.641 "dma_device_type": 2 00:28:05.641 } 00:28:05.641 ], 00:28:05.641 "driver_specific": {} 00:28:05.641 } 00:28:05.641 ] 00:28:05.641 18:26:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:05.641 18:26:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:28:05.641 18:26:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:28:05.641 18:26:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:28:05.641 18:26:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:28:05.641 18:26:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:05.641 18:26:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:05.641 [2024-12-06 18:26:36.587523] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:28:05.641 [2024-12-06 18:26:36.587675] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:28:05.641 [2024-12-06 18:26:36.587710] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:28:05.902 [2024-12-06 18:26:36.589843] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:28:05.902 18:26:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:05.902 18:26:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:28:05.902 18:26:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:28:05.902 18:26:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:28:05.902 18:26:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:28:05.902 18:26:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:28:05.902 18:26:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:28:05.902 18:26:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:28:05.902 18:26:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:28:05.902 18:26:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:28:05.902 18:26:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:28:05.902 18:26:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:05.902 18:26:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:05.902 18:26:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:05.902 18:26:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:28:05.902 18:26:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:05.902 18:26:36 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:28:05.902 "name": "Existed_Raid", 00:28:05.902 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:05.902 "strip_size_kb": 64, 00:28:05.902 "state": "configuring", 00:28:05.902 "raid_level": "concat", 00:28:05.902 "superblock": false, 00:28:05.902 "num_base_bdevs": 3, 00:28:05.902 "num_base_bdevs_discovered": 2, 00:28:05.902 "num_base_bdevs_operational": 3, 00:28:05.902 "base_bdevs_list": [ 00:28:05.902 { 00:28:05.902 "name": "BaseBdev1", 00:28:05.902 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:05.902 "is_configured": false, 00:28:05.902 "data_offset": 0, 00:28:05.902 "data_size": 0 00:28:05.902 }, 00:28:05.902 { 00:28:05.902 "name": "BaseBdev2", 00:28:05.902 "uuid": "2c701a4b-7adc-441e-8c4a-e66a03dc9010", 00:28:05.902 "is_configured": true, 00:28:05.902 "data_offset": 0, 00:28:05.902 "data_size": 65536 00:28:05.902 }, 00:28:05.902 { 00:28:05.902 "name": "BaseBdev3", 00:28:05.902 "uuid": "1af412bc-766d-46e1-8a5a-a6e5d9a5eb28", 00:28:05.902 "is_configured": true, 00:28:05.902 "data_offset": 0, 00:28:05.902 "data_size": 65536 00:28:05.902 } 00:28:05.902 ] 00:28:05.902 }' 00:28:05.902 18:26:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:28:05.902 18:26:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:06.160 18:26:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:28:06.160 18:26:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:06.160 18:26:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:06.160 [2024-12-06 18:26:37.034937] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:28:06.160 18:26:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:06.160 18:26:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # 
verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:28:06.160 18:26:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:28:06.160 18:26:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:28:06.160 18:26:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:28:06.160 18:26:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:28:06.160 18:26:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:28:06.160 18:26:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:28:06.160 18:26:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:28:06.160 18:26:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:28:06.160 18:26:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:28:06.160 18:26:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:06.160 18:26:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:28:06.160 18:26:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:06.160 18:26:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:06.160 18:26:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:06.160 18:26:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:28:06.160 "name": "Existed_Raid", 00:28:06.160 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:06.160 "strip_size_kb": 64, 00:28:06.160 "state": "configuring", 00:28:06.160 "raid_level": "concat", 00:28:06.160 "superblock": false, 
00:28:06.160 "num_base_bdevs": 3, 00:28:06.160 "num_base_bdevs_discovered": 1, 00:28:06.160 "num_base_bdevs_operational": 3, 00:28:06.160 "base_bdevs_list": [ 00:28:06.160 { 00:28:06.160 "name": "BaseBdev1", 00:28:06.160 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:06.160 "is_configured": false, 00:28:06.160 "data_offset": 0, 00:28:06.160 "data_size": 0 00:28:06.160 }, 00:28:06.160 { 00:28:06.160 "name": null, 00:28:06.160 "uuid": "2c701a4b-7adc-441e-8c4a-e66a03dc9010", 00:28:06.160 "is_configured": false, 00:28:06.160 "data_offset": 0, 00:28:06.160 "data_size": 65536 00:28:06.160 }, 00:28:06.160 { 00:28:06.160 "name": "BaseBdev3", 00:28:06.160 "uuid": "1af412bc-766d-46e1-8a5a-a6e5d9a5eb28", 00:28:06.160 "is_configured": true, 00:28:06.160 "data_offset": 0, 00:28:06.160 "data_size": 65536 00:28:06.160 } 00:28:06.160 ] 00:28:06.160 }' 00:28:06.160 18:26:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:28:06.160 18:26:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:06.726 18:26:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:28:06.726 18:26:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:06.726 18:26:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:06.726 18:26:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:06.726 18:26:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:06.726 18:26:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:28:06.726 18:26:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:28:06.726 18:26:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:06.726 
18:26:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:06.726 [2024-12-06 18:26:37.536843] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:28:06.726 BaseBdev1 00:28:06.726 18:26:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:06.726 18:26:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:28:06.726 18:26:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:28:06.726 18:26:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:28:06.726 18:26:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:28:06.726 18:26:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:28:06.726 18:26:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:28:06.726 18:26:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:28:06.726 18:26:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:06.726 18:26:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:06.726 18:26:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:06.726 18:26:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:28:06.726 18:26:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:06.726 18:26:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:06.726 [ 00:28:06.726 { 00:28:06.726 "name": "BaseBdev1", 00:28:06.726 "aliases": [ 00:28:06.726 "25c44129-037e-479e-b123-53b4008a73f8" 00:28:06.726 ], 00:28:06.726 "product_name": 
"Malloc disk", 00:28:06.726 "block_size": 512, 00:28:06.726 "num_blocks": 65536, 00:28:06.726 "uuid": "25c44129-037e-479e-b123-53b4008a73f8", 00:28:06.726 "assigned_rate_limits": { 00:28:06.726 "rw_ios_per_sec": 0, 00:28:06.726 "rw_mbytes_per_sec": 0, 00:28:06.726 "r_mbytes_per_sec": 0, 00:28:06.726 "w_mbytes_per_sec": 0 00:28:06.726 }, 00:28:06.726 "claimed": true, 00:28:06.726 "claim_type": "exclusive_write", 00:28:06.726 "zoned": false, 00:28:06.726 "supported_io_types": { 00:28:06.726 "read": true, 00:28:06.726 "write": true, 00:28:06.726 "unmap": true, 00:28:06.726 "flush": true, 00:28:06.726 "reset": true, 00:28:06.726 "nvme_admin": false, 00:28:06.726 "nvme_io": false, 00:28:06.726 "nvme_io_md": false, 00:28:06.726 "write_zeroes": true, 00:28:06.726 "zcopy": true, 00:28:06.726 "get_zone_info": false, 00:28:06.726 "zone_management": false, 00:28:06.726 "zone_append": false, 00:28:06.726 "compare": false, 00:28:06.726 "compare_and_write": false, 00:28:06.726 "abort": true, 00:28:06.726 "seek_hole": false, 00:28:06.726 "seek_data": false, 00:28:06.726 "copy": true, 00:28:06.726 "nvme_iov_md": false 00:28:06.726 }, 00:28:06.726 "memory_domains": [ 00:28:06.726 { 00:28:06.726 "dma_device_id": "system", 00:28:06.726 "dma_device_type": 1 00:28:06.726 }, 00:28:06.726 { 00:28:06.726 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:06.726 "dma_device_type": 2 00:28:06.726 } 00:28:06.726 ], 00:28:06.726 "driver_specific": {} 00:28:06.726 } 00:28:06.726 ] 00:28:06.726 18:26:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:06.726 18:26:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:28:06.726 18:26:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:28:06.726 18:26:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:28:06.726 18:26:37 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:28:06.726 18:26:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:28:06.726 18:26:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:28:06.726 18:26:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:28:06.726 18:26:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:28:06.726 18:26:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:28:06.726 18:26:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:28:06.726 18:26:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:28:06.727 18:26:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:06.727 18:26:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:06.727 18:26:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:28:06.727 18:26:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:06.727 18:26:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:06.727 18:26:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:28:06.727 "name": "Existed_Raid", 00:28:06.727 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:06.727 "strip_size_kb": 64, 00:28:06.727 "state": "configuring", 00:28:06.727 "raid_level": "concat", 00:28:06.727 "superblock": false, 00:28:06.727 "num_base_bdevs": 3, 00:28:06.727 "num_base_bdevs_discovered": 2, 00:28:06.727 "num_base_bdevs_operational": 3, 00:28:06.727 "base_bdevs_list": [ 00:28:06.727 { 00:28:06.727 "name": "BaseBdev1", 
00:28:06.727 "uuid": "25c44129-037e-479e-b123-53b4008a73f8", 00:28:06.727 "is_configured": true, 00:28:06.727 "data_offset": 0, 00:28:06.727 "data_size": 65536 00:28:06.727 }, 00:28:06.727 { 00:28:06.727 "name": null, 00:28:06.727 "uuid": "2c701a4b-7adc-441e-8c4a-e66a03dc9010", 00:28:06.727 "is_configured": false, 00:28:06.727 "data_offset": 0, 00:28:06.727 "data_size": 65536 00:28:06.727 }, 00:28:06.727 { 00:28:06.727 "name": "BaseBdev3", 00:28:06.727 "uuid": "1af412bc-766d-46e1-8a5a-a6e5d9a5eb28", 00:28:06.727 "is_configured": true, 00:28:06.727 "data_offset": 0, 00:28:06.727 "data_size": 65536 00:28:06.727 } 00:28:06.727 ] 00:28:06.727 }' 00:28:06.727 18:26:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:28:06.727 18:26:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:07.293 18:26:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:28:07.293 18:26:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:07.293 18:26:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:07.293 18:26:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:07.293 18:26:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:07.293 18:26:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:28:07.293 18:26:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:28:07.293 18:26:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:07.293 18:26:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:07.293 [2024-12-06 18:26:38.080260] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:28:07.293 
18:26:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:07.293 18:26:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:28:07.293 18:26:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:28:07.293 18:26:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:28:07.293 18:26:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:28:07.293 18:26:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:28:07.293 18:26:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:28:07.293 18:26:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:28:07.293 18:26:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:28:07.293 18:26:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:28:07.293 18:26:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:28:07.293 18:26:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:07.293 18:26:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:07.293 18:26:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:07.293 18:26:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:28:07.293 18:26:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:07.293 18:26:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:28:07.293 "name": "Existed_Raid", 00:28:07.293 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:28:07.293 "strip_size_kb": 64, 00:28:07.293 "state": "configuring", 00:28:07.293 "raid_level": "concat", 00:28:07.293 "superblock": false, 00:28:07.293 "num_base_bdevs": 3, 00:28:07.293 "num_base_bdevs_discovered": 1, 00:28:07.293 "num_base_bdevs_operational": 3, 00:28:07.293 "base_bdevs_list": [ 00:28:07.293 { 00:28:07.293 "name": "BaseBdev1", 00:28:07.293 "uuid": "25c44129-037e-479e-b123-53b4008a73f8", 00:28:07.293 "is_configured": true, 00:28:07.293 "data_offset": 0, 00:28:07.293 "data_size": 65536 00:28:07.293 }, 00:28:07.293 { 00:28:07.293 "name": null, 00:28:07.293 "uuid": "2c701a4b-7adc-441e-8c4a-e66a03dc9010", 00:28:07.293 "is_configured": false, 00:28:07.293 "data_offset": 0, 00:28:07.293 "data_size": 65536 00:28:07.293 }, 00:28:07.293 { 00:28:07.293 "name": null, 00:28:07.293 "uuid": "1af412bc-766d-46e1-8a5a-a6e5d9a5eb28", 00:28:07.293 "is_configured": false, 00:28:07.293 "data_offset": 0, 00:28:07.293 "data_size": 65536 00:28:07.293 } 00:28:07.293 ] 00:28:07.293 }' 00:28:07.293 18:26:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:28:07.293 18:26:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:07.862 18:26:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:28:07.862 18:26:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:07.862 18:26:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:07.862 18:26:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:07.862 18:26:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:07.862 18:26:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:28:07.862 18:26:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 
-- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:28:07.862 18:26:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:07.862 18:26:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:07.862 [2024-12-06 18:26:38.567552] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:28:07.862 18:26:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:07.862 18:26:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:28:07.862 18:26:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:28:07.862 18:26:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:28:07.862 18:26:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:28:07.862 18:26:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:28:07.862 18:26:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:28:07.862 18:26:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:28:07.862 18:26:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:28:07.862 18:26:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:28:07.862 18:26:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:28:07.862 18:26:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:07.862 18:26:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:28:07.862 18:26:38 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:28:07.862 18:26:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:07.862 18:26:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:07.862 18:26:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:28:07.862 "name": "Existed_Raid", 00:28:07.862 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:07.862 "strip_size_kb": 64, 00:28:07.862 "state": "configuring", 00:28:07.862 "raid_level": "concat", 00:28:07.862 "superblock": false, 00:28:07.862 "num_base_bdevs": 3, 00:28:07.862 "num_base_bdevs_discovered": 2, 00:28:07.862 "num_base_bdevs_operational": 3, 00:28:07.862 "base_bdevs_list": [ 00:28:07.862 { 00:28:07.862 "name": "BaseBdev1", 00:28:07.862 "uuid": "25c44129-037e-479e-b123-53b4008a73f8", 00:28:07.862 "is_configured": true, 00:28:07.863 "data_offset": 0, 00:28:07.863 "data_size": 65536 00:28:07.863 }, 00:28:07.863 { 00:28:07.863 "name": null, 00:28:07.863 "uuid": "2c701a4b-7adc-441e-8c4a-e66a03dc9010", 00:28:07.863 "is_configured": false, 00:28:07.863 "data_offset": 0, 00:28:07.863 "data_size": 65536 00:28:07.863 }, 00:28:07.863 { 00:28:07.863 "name": "BaseBdev3", 00:28:07.863 "uuid": "1af412bc-766d-46e1-8a5a-a6e5d9a5eb28", 00:28:07.863 "is_configured": true, 00:28:07.863 "data_offset": 0, 00:28:07.863 "data_size": 65536 00:28:07.863 } 00:28:07.863 ] 00:28:07.863 }' 00:28:07.863 18:26:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:28:07.863 18:26:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:08.122 18:26:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:28:08.122 18:26:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:08.122 18:26:39 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:28:08.122 18:26:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:08.122 18:26:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:08.122 18:26:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:28:08.122 18:26:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:28:08.122 18:26:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:08.122 18:26:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:08.122 [2024-12-06 18:26:39.058941] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:28:08.381 18:26:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:08.381 18:26:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:28:08.381 18:26:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:28:08.381 18:26:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:28:08.381 18:26:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:28:08.381 18:26:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:28:08.381 18:26:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:28:08.381 18:26:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:28:08.381 18:26:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:28:08.381 18:26:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:28:08.381 
18:26:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:28:08.381 18:26:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:08.381 18:26:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:28:08.381 18:26:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:08.381 18:26:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:08.381 18:26:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:08.381 18:26:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:28:08.381 "name": "Existed_Raid", 00:28:08.381 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:08.381 "strip_size_kb": 64, 00:28:08.381 "state": "configuring", 00:28:08.381 "raid_level": "concat", 00:28:08.381 "superblock": false, 00:28:08.381 "num_base_bdevs": 3, 00:28:08.381 "num_base_bdevs_discovered": 1, 00:28:08.381 "num_base_bdevs_operational": 3, 00:28:08.381 "base_bdevs_list": [ 00:28:08.381 { 00:28:08.381 "name": null, 00:28:08.381 "uuid": "25c44129-037e-479e-b123-53b4008a73f8", 00:28:08.381 "is_configured": false, 00:28:08.381 "data_offset": 0, 00:28:08.381 "data_size": 65536 00:28:08.381 }, 00:28:08.381 { 00:28:08.381 "name": null, 00:28:08.381 "uuid": "2c701a4b-7adc-441e-8c4a-e66a03dc9010", 00:28:08.381 "is_configured": false, 00:28:08.381 "data_offset": 0, 00:28:08.381 "data_size": 65536 00:28:08.381 }, 00:28:08.381 { 00:28:08.381 "name": "BaseBdev3", 00:28:08.381 "uuid": "1af412bc-766d-46e1-8a5a-a6e5d9a5eb28", 00:28:08.381 "is_configured": true, 00:28:08.381 "data_offset": 0, 00:28:08.381 "data_size": 65536 00:28:08.381 } 00:28:08.381 ] 00:28:08.381 }' 00:28:08.381 18:26:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:28:08.381 18:26:39 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:08.640 18:26:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:08.641 18:26:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:28:08.641 18:26:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:08.641 18:26:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:08.641 18:26:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:08.641 18:26:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:28:08.641 18:26:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:28:08.641 18:26:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:08.641 18:26:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:08.641 [2024-12-06 18:26:39.579308] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:28:08.641 18:26:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:08.641 18:26:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:28:08.641 18:26:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:28:08.641 18:26:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:28:08.641 18:26:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:28:08.641 18:26:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:28:08.641 18:26:39 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:28:08.641 18:26:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:28:08.900 18:26:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:28:08.900 18:26:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:28:08.900 18:26:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:28:08.900 18:26:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:08.900 18:26:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:08.900 18:26:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:28:08.900 18:26:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:08.900 18:26:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:08.900 18:26:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:28:08.900 "name": "Existed_Raid", 00:28:08.900 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:08.900 "strip_size_kb": 64, 00:28:08.900 "state": "configuring", 00:28:08.900 "raid_level": "concat", 00:28:08.900 "superblock": false, 00:28:08.900 "num_base_bdevs": 3, 00:28:08.900 "num_base_bdevs_discovered": 2, 00:28:08.900 "num_base_bdevs_operational": 3, 00:28:08.900 "base_bdevs_list": [ 00:28:08.900 { 00:28:08.900 "name": null, 00:28:08.900 "uuid": "25c44129-037e-479e-b123-53b4008a73f8", 00:28:08.900 "is_configured": false, 00:28:08.900 "data_offset": 0, 00:28:08.900 "data_size": 65536 00:28:08.900 }, 00:28:08.900 { 00:28:08.900 "name": "BaseBdev2", 00:28:08.900 "uuid": "2c701a4b-7adc-441e-8c4a-e66a03dc9010", 00:28:08.900 "is_configured": true, 00:28:08.900 "data_offset": 
0, 00:28:08.900 "data_size": 65536 00:28:08.900 }, 00:28:08.900 { 00:28:08.901 "name": "BaseBdev3", 00:28:08.901 "uuid": "1af412bc-766d-46e1-8a5a-a6e5d9a5eb28", 00:28:08.901 "is_configured": true, 00:28:08.901 "data_offset": 0, 00:28:08.901 "data_size": 65536 00:28:08.901 } 00:28:08.901 ] 00:28:08.901 }' 00:28:08.901 18:26:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:28:08.901 18:26:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:09.160 18:26:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:09.160 18:26:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:09.160 18:26:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:09.160 18:26:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:28:09.160 18:26:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:09.160 18:26:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:28:09.160 18:26:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:28:09.160 18:26:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:09.160 18:26:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:09.160 18:26:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:09.160 18:26:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:09.420 18:26:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 25c44129-037e-479e-b123-53b4008a73f8 00:28:09.420 18:26:40 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:28:09.420 18:26:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:09.420 [2024-12-06 18:26:40.155941] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:28:09.420 [2024-12-06 18:26:40.155985] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:28:09.420 [2024-12-06 18:26:40.155997] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:28:09.420 [2024-12-06 18:26:40.156281] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:28:09.420 [2024-12-06 18:26:40.156432] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:28:09.420 [2024-12-06 18:26:40.156443] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:28:09.420 [2024-12-06 18:26:40.156680] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:09.420 NewBaseBdev 00:28:09.420 18:26:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:09.420 18:26:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:28:09.420 18:26:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:28:09.420 18:26:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:28:09.420 18:26:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:28:09.420 18:26:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:28:09.420 18:26:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:28:09.420 18:26:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:28:09.420 
18:26:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:09.420 18:26:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:09.420 18:26:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:09.420 18:26:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:28:09.420 18:26:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:09.420 18:26:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:09.420 [ 00:28:09.420 { 00:28:09.420 "name": "NewBaseBdev", 00:28:09.420 "aliases": [ 00:28:09.420 "25c44129-037e-479e-b123-53b4008a73f8" 00:28:09.420 ], 00:28:09.420 "product_name": "Malloc disk", 00:28:09.420 "block_size": 512, 00:28:09.420 "num_blocks": 65536, 00:28:09.420 "uuid": "25c44129-037e-479e-b123-53b4008a73f8", 00:28:09.420 "assigned_rate_limits": { 00:28:09.420 "rw_ios_per_sec": 0, 00:28:09.420 "rw_mbytes_per_sec": 0, 00:28:09.420 "r_mbytes_per_sec": 0, 00:28:09.420 "w_mbytes_per_sec": 0 00:28:09.420 }, 00:28:09.420 "claimed": true, 00:28:09.420 "claim_type": "exclusive_write", 00:28:09.420 "zoned": false, 00:28:09.420 "supported_io_types": { 00:28:09.420 "read": true, 00:28:09.420 "write": true, 00:28:09.420 "unmap": true, 00:28:09.420 "flush": true, 00:28:09.420 "reset": true, 00:28:09.420 "nvme_admin": false, 00:28:09.420 "nvme_io": false, 00:28:09.420 "nvme_io_md": false, 00:28:09.420 "write_zeroes": true, 00:28:09.420 "zcopy": true, 00:28:09.420 "get_zone_info": false, 00:28:09.420 "zone_management": false, 00:28:09.420 "zone_append": false, 00:28:09.420 "compare": false, 00:28:09.420 "compare_and_write": false, 00:28:09.420 "abort": true, 00:28:09.420 "seek_hole": false, 00:28:09.420 "seek_data": false, 00:28:09.420 "copy": true, 00:28:09.420 "nvme_iov_md": false 00:28:09.420 }, 00:28:09.420 
"memory_domains": [ 00:28:09.420 { 00:28:09.420 "dma_device_id": "system", 00:28:09.420 "dma_device_type": 1 00:28:09.420 }, 00:28:09.420 { 00:28:09.420 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:09.420 "dma_device_type": 2 00:28:09.420 } 00:28:09.420 ], 00:28:09.420 "driver_specific": {} 00:28:09.420 } 00:28:09.420 ] 00:28:09.420 18:26:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:09.420 18:26:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:28:09.420 18:26:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:28:09.420 18:26:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:28:09.420 18:26:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:28:09.420 18:26:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:28:09.420 18:26:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:28:09.420 18:26:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:28:09.420 18:26:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:28:09.420 18:26:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:28:09.420 18:26:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:28:09.420 18:26:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:28:09.420 18:26:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:09.420 18:26:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:09.420 18:26:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 
-- # jq -r '.[] | select(.name == "Existed_Raid")' 00:28:09.420 18:26:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:09.420 18:26:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:09.420 18:26:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:28:09.420 "name": "Existed_Raid", 00:28:09.420 "uuid": "4485097f-454b-4a91-b53b-8c223e54a180", 00:28:09.420 "strip_size_kb": 64, 00:28:09.420 "state": "online", 00:28:09.420 "raid_level": "concat", 00:28:09.420 "superblock": false, 00:28:09.420 "num_base_bdevs": 3, 00:28:09.420 "num_base_bdevs_discovered": 3, 00:28:09.420 "num_base_bdevs_operational": 3, 00:28:09.420 "base_bdevs_list": [ 00:28:09.420 { 00:28:09.420 "name": "NewBaseBdev", 00:28:09.420 "uuid": "25c44129-037e-479e-b123-53b4008a73f8", 00:28:09.420 "is_configured": true, 00:28:09.420 "data_offset": 0, 00:28:09.420 "data_size": 65536 00:28:09.420 }, 00:28:09.420 { 00:28:09.420 "name": "BaseBdev2", 00:28:09.420 "uuid": "2c701a4b-7adc-441e-8c4a-e66a03dc9010", 00:28:09.420 "is_configured": true, 00:28:09.420 "data_offset": 0, 00:28:09.420 "data_size": 65536 00:28:09.420 }, 00:28:09.420 { 00:28:09.420 "name": "BaseBdev3", 00:28:09.420 "uuid": "1af412bc-766d-46e1-8a5a-a6e5d9a5eb28", 00:28:09.420 "is_configured": true, 00:28:09.420 "data_offset": 0, 00:28:09.420 "data_size": 65536 00:28:09.420 } 00:28:09.420 ] 00:28:09.420 }' 00:28:09.420 18:26:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:28:09.420 18:26:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:09.680 18:26:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:28:09.680 18:26:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:28:09.680 18:26:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 
-- # local raid_bdev_info 00:28:09.680 18:26:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:28:09.680 18:26:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:28:09.680 18:26:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:28:09.680 18:26:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:28:09.680 18:26:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:09.680 18:26:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:09.680 18:26:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:28:09.680 [2024-12-06 18:26:40.611668] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:28:09.940 18:26:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:09.940 18:26:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:28:09.940 "name": "Existed_Raid", 00:28:09.940 "aliases": [ 00:28:09.940 "4485097f-454b-4a91-b53b-8c223e54a180" 00:28:09.940 ], 00:28:09.940 "product_name": "Raid Volume", 00:28:09.940 "block_size": 512, 00:28:09.940 "num_blocks": 196608, 00:28:09.940 "uuid": "4485097f-454b-4a91-b53b-8c223e54a180", 00:28:09.940 "assigned_rate_limits": { 00:28:09.940 "rw_ios_per_sec": 0, 00:28:09.940 "rw_mbytes_per_sec": 0, 00:28:09.940 "r_mbytes_per_sec": 0, 00:28:09.940 "w_mbytes_per_sec": 0 00:28:09.940 }, 00:28:09.940 "claimed": false, 00:28:09.940 "zoned": false, 00:28:09.940 "supported_io_types": { 00:28:09.940 "read": true, 00:28:09.940 "write": true, 00:28:09.940 "unmap": true, 00:28:09.940 "flush": true, 00:28:09.940 "reset": true, 00:28:09.940 "nvme_admin": false, 00:28:09.940 "nvme_io": false, 00:28:09.940 "nvme_io_md": false, 00:28:09.940 "write_zeroes": true, 
00:28:09.940 "zcopy": false, 00:28:09.940 "get_zone_info": false, 00:28:09.940 "zone_management": false, 00:28:09.940 "zone_append": false, 00:28:09.940 "compare": false, 00:28:09.940 "compare_and_write": false, 00:28:09.940 "abort": false, 00:28:09.940 "seek_hole": false, 00:28:09.940 "seek_data": false, 00:28:09.940 "copy": false, 00:28:09.940 "nvme_iov_md": false 00:28:09.940 }, 00:28:09.940 "memory_domains": [ 00:28:09.940 { 00:28:09.940 "dma_device_id": "system", 00:28:09.940 "dma_device_type": 1 00:28:09.940 }, 00:28:09.940 { 00:28:09.940 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:09.940 "dma_device_type": 2 00:28:09.940 }, 00:28:09.940 { 00:28:09.940 "dma_device_id": "system", 00:28:09.940 "dma_device_type": 1 00:28:09.940 }, 00:28:09.940 { 00:28:09.940 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:09.940 "dma_device_type": 2 00:28:09.940 }, 00:28:09.940 { 00:28:09.940 "dma_device_id": "system", 00:28:09.940 "dma_device_type": 1 00:28:09.940 }, 00:28:09.940 { 00:28:09.940 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:09.941 "dma_device_type": 2 00:28:09.941 } 00:28:09.941 ], 00:28:09.941 "driver_specific": { 00:28:09.941 "raid": { 00:28:09.941 "uuid": "4485097f-454b-4a91-b53b-8c223e54a180", 00:28:09.941 "strip_size_kb": 64, 00:28:09.941 "state": "online", 00:28:09.941 "raid_level": "concat", 00:28:09.941 "superblock": false, 00:28:09.941 "num_base_bdevs": 3, 00:28:09.941 "num_base_bdevs_discovered": 3, 00:28:09.941 "num_base_bdevs_operational": 3, 00:28:09.941 "base_bdevs_list": [ 00:28:09.941 { 00:28:09.941 "name": "NewBaseBdev", 00:28:09.941 "uuid": "25c44129-037e-479e-b123-53b4008a73f8", 00:28:09.941 "is_configured": true, 00:28:09.941 "data_offset": 0, 00:28:09.941 "data_size": 65536 00:28:09.941 }, 00:28:09.941 { 00:28:09.941 "name": "BaseBdev2", 00:28:09.941 "uuid": "2c701a4b-7adc-441e-8c4a-e66a03dc9010", 00:28:09.941 "is_configured": true, 00:28:09.941 "data_offset": 0, 00:28:09.941 "data_size": 65536 00:28:09.941 }, 00:28:09.941 { 
00:28:09.941 "name": "BaseBdev3", 00:28:09.941 "uuid": "1af412bc-766d-46e1-8a5a-a6e5d9a5eb28", 00:28:09.941 "is_configured": true, 00:28:09.941 "data_offset": 0, 00:28:09.941 "data_size": 65536 00:28:09.941 } 00:28:09.941 ] 00:28:09.941 } 00:28:09.941 } 00:28:09.941 }' 00:28:09.941 18:26:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:28:09.941 18:26:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:28:09.941 BaseBdev2 00:28:09.941 BaseBdev3' 00:28:09.941 18:26:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:28:09.941 18:26:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:28:09.941 18:26:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:28:09.941 18:26:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:28:09.941 18:26:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:28:09.941 18:26:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:09.941 18:26:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:09.941 18:26:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:09.941 18:26:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:28:09.941 18:26:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:28:09.941 18:26:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:28:09.941 18:26:40 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:28:09.941 18:26:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:28:09.941 18:26:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:09.941 18:26:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:09.941 18:26:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:09.941 18:26:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:28:09.941 18:26:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:28:09.941 18:26:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:28:09.941 18:26:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:28:09.941 18:26:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:09.941 18:26:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:09.941 18:26:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:28:09.941 18:26:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:09.941 18:26:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:28:09.941 18:26:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:28:09.941 18:26:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:28:09.941 18:26:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:09.941 18:26:40 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@10 -- # set +x 00:28:09.941 [2024-12-06 18:26:40.875132] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:28:09.941 [2024-12-06 18:26:40.875181] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:28:09.941 [2024-12-06 18:26:40.875260] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:28:09.941 [2024-12-06 18:26:40.875316] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:28:09.941 [2024-12-06 18:26:40.875332] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:28:09.941 18:26:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:09.941 18:26:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 65352 00:28:09.941 18:26:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 65352 ']' 00:28:09.941 18:26:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 65352 00:28:09.941 18:26:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:28:10.201 18:26:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:10.201 18:26:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65352 00:28:10.201 killing process with pid 65352 00:28:10.201 18:26:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:28:10.201 18:26:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:28:10.201 18:26:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65352' 00:28:10.201 18:26:40 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@973 -- # kill 65352 00:28:10.201 [2024-12-06 18:26:40.928725] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:28:10.201 18:26:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 65352 00:28:10.460 [2024-12-06 18:26:41.232868] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:28:11.865 18:26:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:28:11.865 00:28:11.865 real 0m10.679s 00:28:11.865 user 0m16.840s 00:28:11.865 sys 0m2.247s 00:28:11.865 18:26:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:11.865 ************************************ 00:28:11.865 END TEST raid_state_function_test 00:28:11.865 ************************************ 00:28:11.865 18:26:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:11.865 18:26:42 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test concat 3 true 00:28:11.865 18:26:42 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:28:11.865 18:26:42 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:11.865 18:26:42 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:28:11.865 ************************************ 00:28:11.865 START TEST raid_state_function_test_sb 00:28:11.865 ************************************ 00:28:11.865 18:26:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 3 true 00:28:11.865 18:26:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:28:11.865 18:26:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:28:11.865 18:26:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:28:11.865 18:26:42 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@208 -- # local raid_bdev 00:28:11.865 18:26:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:28:11.865 18:26:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:28:11.865 18:26:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:28:11.865 18:26:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:28:11.865 18:26:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:28:11.865 18:26:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:28:11.865 18:26:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:28:11.865 18:26:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:28:11.865 18:26:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:28:11.865 18:26:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:28:11.865 18:26:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:28:11.865 18:26:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:28:11.865 18:26:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:28:11.865 18:26:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:28:11.865 18:26:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:28:11.865 18:26:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:28:11.865 18:26:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:28:11.865 18:26:42 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:28:11.865 18:26:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:28:11.865 18:26:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:28:11.865 18:26:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:28:11.865 18:26:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:28:11.865 18:26:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=65974 00:28:11.865 18:26:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 65974' 00:28:11.865 Process raid pid: 65974 00:28:11.865 18:26:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:28:11.865 18:26:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 65974 00:28:11.865 18:26:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 65974 ']' 00:28:11.865 18:26:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:11.865 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:11.865 18:26:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:11.865 18:26:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:28:11.865 18:26:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:11.865 18:26:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:11.865 [2024-12-06 18:26:42.575957] Starting SPDK v25.01-pre git sha1 b6a18b192 / DPDK 24.03.0 initialization... 00:28:11.865 [2024-12-06 18:26:42.576082] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:11.865 [2024-12-06 18:26:42.758177] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:12.124 [2024-12-06 18:26:42.875757] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:12.383 [2024-12-06 18:26:43.084681] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:28:12.383 [2024-12-06 18:26:43.084724] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:28:12.643 18:26:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:12.643 18:26:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:28:12.643 18:26:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:28:12.643 18:26:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:12.643 18:26:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:12.643 [2024-12-06 18:26:43.518676] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:28:12.643 [2024-12-06 18:26:43.518958] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:28:12.643 [2024-12-06 
18:26:43.518982] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:28:12.643 [2024-12-06 18:26:43.518998] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:28:12.643 [2024-12-06 18:26:43.519007] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:28:12.643 [2024-12-06 18:26:43.519019] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:28:12.643 18:26:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:12.643 18:26:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:28:12.643 18:26:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:28:12.643 18:26:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:28:12.643 18:26:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:28:12.643 18:26:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:28:12.643 18:26:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:28:12.643 18:26:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:28:12.643 18:26:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:28:12.643 18:26:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:28:12.643 18:26:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:28:12.643 18:26:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:28:12.643 18:26:43 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:12.643 18:26:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:12.643 18:26:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:12.643 18:26:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:12.643 18:26:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:28:12.643 "name": "Existed_Raid", 00:28:12.643 "uuid": "ad3097ff-b4d7-4226-b3f6-d59d3b30decb", 00:28:12.643 "strip_size_kb": 64, 00:28:12.643 "state": "configuring", 00:28:12.643 "raid_level": "concat", 00:28:12.643 "superblock": true, 00:28:12.643 "num_base_bdevs": 3, 00:28:12.643 "num_base_bdevs_discovered": 0, 00:28:12.643 "num_base_bdevs_operational": 3, 00:28:12.643 "base_bdevs_list": [ 00:28:12.643 { 00:28:12.643 "name": "BaseBdev1", 00:28:12.643 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:12.643 "is_configured": false, 00:28:12.643 "data_offset": 0, 00:28:12.643 "data_size": 0 00:28:12.643 }, 00:28:12.643 { 00:28:12.643 "name": "BaseBdev2", 00:28:12.643 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:12.643 "is_configured": false, 00:28:12.643 "data_offset": 0, 00:28:12.643 "data_size": 0 00:28:12.643 }, 00:28:12.643 { 00:28:12.643 "name": "BaseBdev3", 00:28:12.643 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:12.643 "is_configured": false, 00:28:12.643 "data_offset": 0, 00:28:12.643 "data_size": 0 00:28:12.643 } 00:28:12.643 ] 00:28:12.643 }' 00:28:12.643 18:26:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:28:12.643 18:26:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:13.212 18:26:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:28:13.212 18:26:43 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:28:13.212 18:26:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:13.212 [2024-12-06 18:26:43.938037] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:28:13.212 [2024-12-06 18:26:43.938277] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:28:13.212 18:26:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:13.212 18:26:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:28:13.212 18:26:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:13.212 18:26:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:13.212 [2024-12-06 18:26:43.950019] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:28:13.212 [2024-12-06 18:26:43.950070] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:28:13.212 [2024-12-06 18:26:43.950080] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:28:13.212 [2024-12-06 18:26:43.950093] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:28:13.212 [2024-12-06 18:26:43.950101] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:28:13.212 [2024-12-06 18:26:43.950114] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:28:13.212 18:26:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:13.212 18:26:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:28:13.212 
18:26:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:13.212 18:26:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:13.212 [2024-12-06 18:26:44.000834] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:28:13.212 BaseBdev1 00:28:13.212 18:26:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:13.212 18:26:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:28:13.212 18:26:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:28:13.212 18:26:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:28:13.212 18:26:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:28:13.212 18:26:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:28:13.212 18:26:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:28:13.212 18:26:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:28:13.212 18:26:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:13.212 18:26:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:13.212 18:26:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:13.212 18:26:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:28:13.212 18:26:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:13.212 18:26:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:13.212 [ 00:28:13.212 { 
00:28:13.212 "name": "BaseBdev1", 00:28:13.212 "aliases": [ 00:28:13.212 "11fc0176-1659-48eb-91d2-49f25279c903" 00:28:13.212 ], 00:28:13.212 "product_name": "Malloc disk", 00:28:13.212 "block_size": 512, 00:28:13.212 "num_blocks": 65536, 00:28:13.212 "uuid": "11fc0176-1659-48eb-91d2-49f25279c903", 00:28:13.212 "assigned_rate_limits": { 00:28:13.212 "rw_ios_per_sec": 0, 00:28:13.212 "rw_mbytes_per_sec": 0, 00:28:13.212 "r_mbytes_per_sec": 0, 00:28:13.212 "w_mbytes_per_sec": 0 00:28:13.212 }, 00:28:13.212 "claimed": true, 00:28:13.212 "claim_type": "exclusive_write", 00:28:13.212 "zoned": false, 00:28:13.212 "supported_io_types": { 00:28:13.212 "read": true, 00:28:13.212 "write": true, 00:28:13.212 "unmap": true, 00:28:13.212 "flush": true, 00:28:13.212 "reset": true, 00:28:13.212 "nvme_admin": false, 00:28:13.212 "nvme_io": false, 00:28:13.212 "nvme_io_md": false, 00:28:13.212 "write_zeroes": true, 00:28:13.212 "zcopy": true, 00:28:13.212 "get_zone_info": false, 00:28:13.212 "zone_management": false, 00:28:13.212 "zone_append": false, 00:28:13.212 "compare": false, 00:28:13.212 "compare_and_write": false, 00:28:13.212 "abort": true, 00:28:13.212 "seek_hole": false, 00:28:13.212 "seek_data": false, 00:28:13.212 "copy": true, 00:28:13.212 "nvme_iov_md": false 00:28:13.212 }, 00:28:13.212 "memory_domains": [ 00:28:13.212 { 00:28:13.212 "dma_device_id": "system", 00:28:13.212 "dma_device_type": 1 00:28:13.212 }, 00:28:13.212 { 00:28:13.212 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:13.212 "dma_device_type": 2 00:28:13.212 } 00:28:13.212 ], 00:28:13.212 "driver_specific": {} 00:28:13.212 } 00:28:13.212 ] 00:28:13.212 18:26:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:13.212 18:26:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:28:13.212 18:26:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 
00:28:13.212 18:26:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:28:13.212 18:26:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:28:13.212 18:26:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:28:13.212 18:26:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:28:13.212 18:26:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:28:13.212 18:26:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:28:13.212 18:26:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:28:13.212 18:26:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:28:13.213 18:26:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:28:13.213 18:26:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:13.213 18:26:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:28:13.213 18:26:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:13.213 18:26:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:13.213 18:26:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:13.213 18:26:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:28:13.213 "name": "Existed_Raid", 00:28:13.213 "uuid": "4684a406-ded7-4896-b9fe-706f5a007be3", 00:28:13.213 "strip_size_kb": 64, 00:28:13.213 "state": "configuring", 00:28:13.213 "raid_level": "concat", 00:28:13.213 "superblock": true, 00:28:13.213 
"num_base_bdevs": 3, 00:28:13.213 "num_base_bdevs_discovered": 1, 00:28:13.213 "num_base_bdevs_operational": 3, 00:28:13.213 "base_bdevs_list": [ 00:28:13.213 { 00:28:13.213 "name": "BaseBdev1", 00:28:13.213 "uuid": "11fc0176-1659-48eb-91d2-49f25279c903", 00:28:13.213 "is_configured": true, 00:28:13.213 "data_offset": 2048, 00:28:13.213 "data_size": 63488 00:28:13.213 }, 00:28:13.213 { 00:28:13.213 "name": "BaseBdev2", 00:28:13.213 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:13.213 "is_configured": false, 00:28:13.213 "data_offset": 0, 00:28:13.213 "data_size": 0 00:28:13.213 }, 00:28:13.213 { 00:28:13.213 "name": "BaseBdev3", 00:28:13.213 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:13.213 "is_configured": false, 00:28:13.213 "data_offset": 0, 00:28:13.213 "data_size": 0 00:28:13.213 } 00:28:13.213 ] 00:28:13.213 }' 00:28:13.213 18:26:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:28:13.213 18:26:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:13.782 18:26:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:28:13.782 18:26:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:13.782 18:26:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:13.782 [2024-12-06 18:26:44.460285] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:28:13.782 [2024-12-06 18:26:44.460538] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:28:13.782 18:26:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:13.782 18:26:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:28:13.782 
18:26:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:13.782 18:26:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:13.782 [2024-12-06 18:26:44.468365] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:28:13.782 [2024-12-06 18:26:44.470467] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:28:13.782 [2024-12-06 18:26:44.470515] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:28:13.782 [2024-12-06 18:26:44.470527] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:28:13.782 [2024-12-06 18:26:44.470540] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:28:13.782 18:26:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:13.782 18:26:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:28:13.782 18:26:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:28:13.782 18:26:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:28:13.782 18:26:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:28:13.782 18:26:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:28:13.782 18:26:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:28:13.782 18:26:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:28:13.782 18:26:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:28:13.782 18:26:44 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:28:13.782 18:26:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:28:13.782 18:26:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:28:13.782 18:26:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:28:13.782 18:26:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:13.782 18:26:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:28:13.782 18:26:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:13.782 18:26:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:13.782 18:26:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:13.782 18:26:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:28:13.782 "name": "Existed_Raid", 00:28:13.782 "uuid": "5c1c9b88-239d-4c4e-a63f-a94b0f848f89", 00:28:13.782 "strip_size_kb": 64, 00:28:13.782 "state": "configuring", 00:28:13.782 "raid_level": "concat", 00:28:13.782 "superblock": true, 00:28:13.782 "num_base_bdevs": 3, 00:28:13.782 "num_base_bdevs_discovered": 1, 00:28:13.782 "num_base_bdevs_operational": 3, 00:28:13.782 "base_bdevs_list": [ 00:28:13.782 { 00:28:13.782 "name": "BaseBdev1", 00:28:13.782 "uuid": "11fc0176-1659-48eb-91d2-49f25279c903", 00:28:13.782 "is_configured": true, 00:28:13.782 "data_offset": 2048, 00:28:13.782 "data_size": 63488 00:28:13.782 }, 00:28:13.782 { 00:28:13.782 "name": "BaseBdev2", 00:28:13.782 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:13.782 "is_configured": false, 00:28:13.782 "data_offset": 0, 00:28:13.782 "data_size": 0 00:28:13.782 }, 00:28:13.782 { 00:28:13.782 "name": "BaseBdev3", 00:28:13.782 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:28:13.782 "is_configured": false, 00:28:13.782 "data_offset": 0, 00:28:13.782 "data_size": 0 00:28:13.782 } 00:28:13.782 ] 00:28:13.782 }' 00:28:13.782 18:26:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:28:13.783 18:26:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:14.041 18:26:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:28:14.041 18:26:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:14.042 18:26:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:14.042 [2024-12-06 18:26:44.976329] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:28:14.042 BaseBdev2 00:28:14.042 18:26:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:14.042 18:26:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:28:14.042 18:26:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:28:14.042 18:26:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:28:14.042 18:26:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:28:14.042 18:26:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:28:14.042 18:26:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:28:14.042 18:26:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:28:14.042 18:26:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:14.042 18:26:44 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:28:14.042 18:26:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:14.299 18:26:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:28:14.299 18:26:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:14.299 18:26:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:14.299 [ 00:28:14.299 { 00:28:14.299 "name": "BaseBdev2", 00:28:14.299 "aliases": [ 00:28:14.299 "e6268043-b9e1-4026-a001-8b0d2352d02c" 00:28:14.299 ], 00:28:14.299 "product_name": "Malloc disk", 00:28:14.299 "block_size": 512, 00:28:14.299 "num_blocks": 65536, 00:28:14.299 "uuid": "e6268043-b9e1-4026-a001-8b0d2352d02c", 00:28:14.299 "assigned_rate_limits": { 00:28:14.299 "rw_ios_per_sec": 0, 00:28:14.300 "rw_mbytes_per_sec": 0, 00:28:14.300 "r_mbytes_per_sec": 0, 00:28:14.300 "w_mbytes_per_sec": 0 00:28:14.300 }, 00:28:14.300 "claimed": true, 00:28:14.300 "claim_type": "exclusive_write", 00:28:14.300 "zoned": false, 00:28:14.300 "supported_io_types": { 00:28:14.300 "read": true, 00:28:14.300 "write": true, 00:28:14.300 "unmap": true, 00:28:14.300 "flush": true, 00:28:14.300 "reset": true, 00:28:14.300 "nvme_admin": false, 00:28:14.300 "nvme_io": false, 00:28:14.300 "nvme_io_md": false, 00:28:14.300 "write_zeroes": true, 00:28:14.300 "zcopy": true, 00:28:14.300 "get_zone_info": false, 00:28:14.300 "zone_management": false, 00:28:14.300 "zone_append": false, 00:28:14.300 "compare": false, 00:28:14.300 "compare_and_write": false, 00:28:14.300 "abort": true, 00:28:14.300 "seek_hole": false, 00:28:14.300 "seek_data": false, 00:28:14.300 "copy": true, 00:28:14.300 "nvme_iov_md": false 00:28:14.300 }, 00:28:14.300 "memory_domains": [ 00:28:14.300 { 00:28:14.300 "dma_device_id": "system", 00:28:14.300 "dma_device_type": 1 00:28:14.300 }, 00:28:14.300 { 00:28:14.300 
"dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:14.300 "dma_device_type": 2 00:28:14.300 } 00:28:14.300 ], 00:28:14.300 "driver_specific": {} 00:28:14.300 } 00:28:14.300 ] 00:28:14.300 18:26:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:14.300 18:26:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:28:14.300 18:26:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:28:14.300 18:26:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:28:14.300 18:26:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:28:14.300 18:26:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:28:14.300 18:26:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:28:14.300 18:26:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:28:14.300 18:26:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:28:14.300 18:26:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:28:14.300 18:26:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:28:14.300 18:26:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:28:14.300 18:26:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:28:14.300 18:26:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:28:14.300 18:26:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:14.300 18:26:45 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:28:14.300 18:26:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:14.300 18:26:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:14.300 18:26:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:14.300 18:26:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:28:14.300 "name": "Existed_Raid", 00:28:14.300 "uuid": "5c1c9b88-239d-4c4e-a63f-a94b0f848f89", 00:28:14.300 "strip_size_kb": 64, 00:28:14.300 "state": "configuring", 00:28:14.300 "raid_level": "concat", 00:28:14.300 "superblock": true, 00:28:14.300 "num_base_bdevs": 3, 00:28:14.300 "num_base_bdevs_discovered": 2, 00:28:14.300 "num_base_bdevs_operational": 3, 00:28:14.300 "base_bdevs_list": [ 00:28:14.300 { 00:28:14.300 "name": "BaseBdev1", 00:28:14.300 "uuid": "11fc0176-1659-48eb-91d2-49f25279c903", 00:28:14.300 "is_configured": true, 00:28:14.300 "data_offset": 2048, 00:28:14.300 "data_size": 63488 00:28:14.300 }, 00:28:14.300 { 00:28:14.300 "name": "BaseBdev2", 00:28:14.300 "uuid": "e6268043-b9e1-4026-a001-8b0d2352d02c", 00:28:14.300 "is_configured": true, 00:28:14.300 "data_offset": 2048, 00:28:14.300 "data_size": 63488 00:28:14.300 }, 00:28:14.300 { 00:28:14.300 "name": "BaseBdev3", 00:28:14.300 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:14.300 "is_configured": false, 00:28:14.300 "data_offset": 0, 00:28:14.300 "data_size": 0 00:28:14.300 } 00:28:14.300 ] 00:28:14.300 }' 00:28:14.300 18:26:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:28:14.300 18:26:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:14.559 18:26:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:28:14.559 18:26:45 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:14.559 18:26:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:14.559 [2024-12-06 18:26:45.494532] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:28:14.559 [2024-12-06 18:26:45.494799] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:28:14.559 [2024-12-06 18:26:45.494823] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:28:14.559 [2024-12-06 18:26:45.495104] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:28:14.559 BaseBdev3 00:28:14.559 [2024-12-06 18:26:45.495286] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:28:14.559 [2024-12-06 18:26:45.495300] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:28:14.559 [2024-12-06 18:26:45.495441] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:14.559 18:26:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:14.559 18:26:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:28:14.559 18:26:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:28:14.559 18:26:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:28:14.559 18:26:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:28:14.559 18:26:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:28:14.559 18:26:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:28:14.559 18:26:45 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:28:14.559 18:26:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:14.559 18:26:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:14.818 18:26:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:14.818 18:26:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:28:14.818 18:26:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:14.818 18:26:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:14.818 [ 00:28:14.818 { 00:28:14.818 "name": "BaseBdev3", 00:28:14.818 "aliases": [ 00:28:14.818 "3cea6a47-16fa-4edc-92ce-3999375f5fab" 00:28:14.818 ], 00:28:14.818 "product_name": "Malloc disk", 00:28:14.818 "block_size": 512, 00:28:14.818 "num_blocks": 65536, 00:28:14.818 "uuid": "3cea6a47-16fa-4edc-92ce-3999375f5fab", 00:28:14.818 "assigned_rate_limits": { 00:28:14.818 "rw_ios_per_sec": 0, 00:28:14.818 "rw_mbytes_per_sec": 0, 00:28:14.818 "r_mbytes_per_sec": 0, 00:28:14.818 "w_mbytes_per_sec": 0 00:28:14.818 }, 00:28:14.818 "claimed": true, 00:28:14.818 "claim_type": "exclusive_write", 00:28:14.818 "zoned": false, 00:28:14.818 "supported_io_types": { 00:28:14.818 "read": true, 00:28:14.818 "write": true, 00:28:14.818 "unmap": true, 00:28:14.818 "flush": true, 00:28:14.818 "reset": true, 00:28:14.818 "nvme_admin": false, 00:28:14.818 "nvme_io": false, 00:28:14.818 "nvme_io_md": false, 00:28:14.818 "write_zeroes": true, 00:28:14.818 "zcopy": true, 00:28:14.818 "get_zone_info": false, 00:28:14.818 "zone_management": false, 00:28:14.818 "zone_append": false, 00:28:14.818 "compare": false, 00:28:14.818 "compare_and_write": false, 00:28:14.818 "abort": true, 00:28:14.818 "seek_hole": false, 00:28:14.818 "seek_data": false, 
00:28:14.818 "copy": true, 00:28:14.818 "nvme_iov_md": false 00:28:14.818 }, 00:28:14.818 "memory_domains": [ 00:28:14.818 { 00:28:14.818 "dma_device_id": "system", 00:28:14.818 "dma_device_type": 1 00:28:14.818 }, 00:28:14.818 { 00:28:14.818 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:14.818 "dma_device_type": 2 00:28:14.818 } 00:28:14.818 ], 00:28:14.818 "driver_specific": {} 00:28:14.818 } 00:28:14.818 ] 00:28:14.818 18:26:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:14.818 18:26:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:28:14.818 18:26:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:28:14.818 18:26:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:28:14.818 18:26:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:28:14.818 18:26:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:28:14.818 18:26:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:28:14.818 18:26:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:28:14.818 18:26:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:28:14.818 18:26:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:28:14.818 18:26:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:28:14.818 18:26:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:28:14.818 18:26:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:28:14.818 18:26:45 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:28:14.818 18:26:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:14.818 18:26:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:28:14.818 18:26:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:14.818 18:26:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:14.818 18:26:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:14.818 18:26:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:28:14.818 "name": "Existed_Raid", 00:28:14.818 "uuid": "5c1c9b88-239d-4c4e-a63f-a94b0f848f89", 00:28:14.818 "strip_size_kb": 64, 00:28:14.818 "state": "online", 00:28:14.818 "raid_level": "concat", 00:28:14.818 "superblock": true, 00:28:14.818 "num_base_bdevs": 3, 00:28:14.818 "num_base_bdevs_discovered": 3, 00:28:14.818 "num_base_bdevs_operational": 3, 00:28:14.818 "base_bdevs_list": [ 00:28:14.818 { 00:28:14.818 "name": "BaseBdev1", 00:28:14.818 "uuid": "11fc0176-1659-48eb-91d2-49f25279c903", 00:28:14.818 "is_configured": true, 00:28:14.818 "data_offset": 2048, 00:28:14.818 "data_size": 63488 00:28:14.818 }, 00:28:14.818 { 00:28:14.818 "name": "BaseBdev2", 00:28:14.818 "uuid": "e6268043-b9e1-4026-a001-8b0d2352d02c", 00:28:14.818 "is_configured": true, 00:28:14.818 "data_offset": 2048, 00:28:14.818 "data_size": 63488 00:28:14.818 }, 00:28:14.818 { 00:28:14.818 "name": "BaseBdev3", 00:28:14.818 "uuid": "3cea6a47-16fa-4edc-92ce-3999375f5fab", 00:28:14.818 "is_configured": true, 00:28:14.818 "data_offset": 2048, 00:28:14.818 "data_size": 63488 00:28:14.818 } 00:28:14.818 ] 00:28:14.818 }' 00:28:14.818 18:26:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:28:14.818 18:26:45 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:15.077 18:26:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:28:15.077 18:26:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:28:15.077 18:26:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:28:15.077 18:26:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:28:15.077 18:26:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:28:15.077 18:26:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:28:15.077 18:26:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:28:15.077 18:26:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:28:15.077 18:26:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:15.077 18:26:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:15.077 [2024-12-06 18:26:45.994230] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:28:15.336 18:26:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:15.336 18:26:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:28:15.336 "name": "Existed_Raid", 00:28:15.336 "aliases": [ 00:28:15.336 "5c1c9b88-239d-4c4e-a63f-a94b0f848f89" 00:28:15.336 ], 00:28:15.336 "product_name": "Raid Volume", 00:28:15.336 "block_size": 512, 00:28:15.336 "num_blocks": 190464, 00:28:15.336 "uuid": "5c1c9b88-239d-4c4e-a63f-a94b0f848f89", 00:28:15.336 "assigned_rate_limits": { 00:28:15.336 "rw_ios_per_sec": 0, 00:28:15.336 "rw_mbytes_per_sec": 0, 00:28:15.336 
"r_mbytes_per_sec": 0, 00:28:15.336 "w_mbytes_per_sec": 0 00:28:15.336 }, 00:28:15.336 "claimed": false, 00:28:15.336 "zoned": false, 00:28:15.336 "supported_io_types": { 00:28:15.336 "read": true, 00:28:15.336 "write": true, 00:28:15.336 "unmap": true, 00:28:15.336 "flush": true, 00:28:15.336 "reset": true, 00:28:15.336 "nvme_admin": false, 00:28:15.336 "nvme_io": false, 00:28:15.336 "nvme_io_md": false, 00:28:15.336 "write_zeroes": true, 00:28:15.336 "zcopy": false, 00:28:15.336 "get_zone_info": false, 00:28:15.336 "zone_management": false, 00:28:15.336 "zone_append": false, 00:28:15.336 "compare": false, 00:28:15.336 "compare_and_write": false, 00:28:15.336 "abort": false, 00:28:15.336 "seek_hole": false, 00:28:15.336 "seek_data": false, 00:28:15.336 "copy": false, 00:28:15.337 "nvme_iov_md": false 00:28:15.337 }, 00:28:15.337 "memory_domains": [ 00:28:15.337 { 00:28:15.337 "dma_device_id": "system", 00:28:15.337 "dma_device_type": 1 00:28:15.337 }, 00:28:15.337 { 00:28:15.337 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:15.337 "dma_device_type": 2 00:28:15.337 }, 00:28:15.337 { 00:28:15.337 "dma_device_id": "system", 00:28:15.337 "dma_device_type": 1 00:28:15.337 }, 00:28:15.337 { 00:28:15.337 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:15.337 "dma_device_type": 2 00:28:15.337 }, 00:28:15.337 { 00:28:15.337 "dma_device_id": "system", 00:28:15.337 "dma_device_type": 1 00:28:15.337 }, 00:28:15.337 { 00:28:15.337 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:15.337 "dma_device_type": 2 00:28:15.337 } 00:28:15.337 ], 00:28:15.337 "driver_specific": { 00:28:15.337 "raid": { 00:28:15.337 "uuid": "5c1c9b88-239d-4c4e-a63f-a94b0f848f89", 00:28:15.337 "strip_size_kb": 64, 00:28:15.337 "state": "online", 00:28:15.337 "raid_level": "concat", 00:28:15.337 "superblock": true, 00:28:15.337 "num_base_bdevs": 3, 00:28:15.337 "num_base_bdevs_discovered": 3, 00:28:15.337 "num_base_bdevs_operational": 3, 00:28:15.337 "base_bdevs_list": [ 00:28:15.337 { 00:28:15.337 
"name": "BaseBdev1", 00:28:15.337 "uuid": "11fc0176-1659-48eb-91d2-49f25279c903", 00:28:15.337 "is_configured": true, 00:28:15.337 "data_offset": 2048, 00:28:15.337 "data_size": 63488 00:28:15.337 }, 00:28:15.337 { 00:28:15.337 "name": "BaseBdev2", 00:28:15.337 "uuid": "e6268043-b9e1-4026-a001-8b0d2352d02c", 00:28:15.337 "is_configured": true, 00:28:15.337 "data_offset": 2048, 00:28:15.337 "data_size": 63488 00:28:15.337 }, 00:28:15.337 { 00:28:15.337 "name": "BaseBdev3", 00:28:15.337 "uuid": "3cea6a47-16fa-4edc-92ce-3999375f5fab", 00:28:15.337 "is_configured": true, 00:28:15.337 "data_offset": 2048, 00:28:15.337 "data_size": 63488 00:28:15.337 } 00:28:15.337 ] 00:28:15.337 } 00:28:15.337 } 00:28:15.337 }' 00:28:15.337 18:26:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:28:15.337 18:26:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:28:15.337 BaseBdev2 00:28:15.337 BaseBdev3' 00:28:15.337 18:26:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:28:15.337 18:26:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:28:15.337 18:26:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:28:15.337 18:26:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:28:15.337 18:26:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:15.337 18:26:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:15.337 18:26:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:28:15.337 18:26:46 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:15.337 18:26:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:28:15.337 18:26:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:28:15.337 18:26:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:28:15.337 18:26:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:28:15.337 18:26:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:15.337 18:26:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:15.337 18:26:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:28:15.337 18:26:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:15.337 18:26:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:28:15.337 18:26:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:28:15.337 18:26:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:28:15.337 18:26:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:28:15.337 18:26:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:15.337 18:26:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:28:15.337 18:26:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:15.337 18:26:46 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:15.337 18:26:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:28:15.337 18:26:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:28:15.337 18:26:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:28:15.337 18:26:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:15.337 18:26:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:15.605 [2024-12-06 18:26:46.281821] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:28:15.606 [2024-12-06 18:26:46.281852] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:28:15.606 [2024-12-06 18:26:46.281906] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:28:15.606 18:26:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:15.606 18:26:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:28:15.606 18:26:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:28:15.606 18:26:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:28:15.606 18:26:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:28:15.606 18:26:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:28:15.606 18:26:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2 00:28:15.606 18:26:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:28:15.606 18:26:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local 
expected_state=offline 00:28:15.606 18:26:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:28:15.606 18:26:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:28:15.606 18:26:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:28:15.606 18:26:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:28:15.606 18:26:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:28:15.606 18:26:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:28:15.606 18:26:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:28:15.606 18:26:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:15.606 18:26:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:15.606 18:26:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:15.606 18:26:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:28:15.606 18:26:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:15.606 18:26:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:28:15.606 "name": "Existed_Raid", 00:28:15.606 "uuid": "5c1c9b88-239d-4c4e-a63f-a94b0f848f89", 00:28:15.606 "strip_size_kb": 64, 00:28:15.606 "state": "offline", 00:28:15.606 "raid_level": "concat", 00:28:15.606 "superblock": true, 00:28:15.606 "num_base_bdevs": 3, 00:28:15.606 "num_base_bdevs_discovered": 2, 00:28:15.606 "num_base_bdevs_operational": 2, 00:28:15.606 "base_bdevs_list": [ 00:28:15.606 { 00:28:15.606 "name": null, 00:28:15.607 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:28:15.607 "is_configured": false, 00:28:15.607 "data_offset": 0, 00:28:15.607 "data_size": 63488 00:28:15.607 }, 00:28:15.607 { 00:28:15.607 "name": "BaseBdev2", 00:28:15.607 "uuid": "e6268043-b9e1-4026-a001-8b0d2352d02c", 00:28:15.607 "is_configured": true, 00:28:15.607 "data_offset": 2048, 00:28:15.607 "data_size": 63488 00:28:15.607 }, 00:28:15.607 { 00:28:15.607 "name": "BaseBdev3", 00:28:15.607 "uuid": "3cea6a47-16fa-4edc-92ce-3999375f5fab", 00:28:15.607 "is_configured": true, 00:28:15.607 "data_offset": 2048, 00:28:15.607 "data_size": 63488 00:28:15.607 } 00:28:15.607 ] 00:28:15.607 }' 00:28:15.607 18:26:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:28:15.607 18:26:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:16.176 18:26:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:28:16.176 18:26:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:28:16.176 18:26:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:16.176 18:26:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:16.176 18:26:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:16.176 18:26:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:28:16.176 18:26:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:16.176 18:26:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:28:16.176 18:26:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:28:16.176 18:26:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 
00:28:16.176 18:26:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:16.176 18:26:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:16.176 [2024-12-06 18:26:46.878502] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:28:16.176 18:26:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:16.176 18:26:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:28:16.176 18:26:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:28:16.176 18:26:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:16.176 18:26:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:28:16.176 18:26:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:16.176 18:26:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:16.176 18:26:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:16.176 18:26:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:28:16.176 18:26:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:28:16.176 18:26:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:28:16.176 18:26:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:16.176 18:26:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:16.176 [2024-12-06 18:26:47.031876] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:28:16.176 [2024-12-06 18:26:47.031933] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:28:16.436 18:26:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:16.436 18:26:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:28:16.436 18:26:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:28:16.436 18:26:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:16.436 18:26:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:16.436 18:26:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:16.436 18:26:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:28:16.436 18:26:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:16.436 18:26:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:28:16.436 18:26:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:28:16.436 18:26:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:28:16.436 18:26:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:28:16.436 18:26:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:28:16.436 18:26:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:28:16.436 18:26:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:16.436 18:26:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:16.436 BaseBdev2 00:28:16.436 18:26:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:16.436 
18:26:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:28:16.436 18:26:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:28:16.436 18:26:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:28:16.436 18:26:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:28:16.436 18:26:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:28:16.436 18:26:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:28:16.436 18:26:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:28:16.436 18:26:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:16.436 18:26:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:16.436 18:26:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:16.436 18:26:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:28:16.436 18:26:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:16.436 18:26:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:16.436 [ 00:28:16.436 { 00:28:16.436 "name": "BaseBdev2", 00:28:16.436 "aliases": [ 00:28:16.436 "77611ce1-0b13-46e1-ad2a-878d0eb02524" 00:28:16.436 ], 00:28:16.436 "product_name": "Malloc disk", 00:28:16.436 "block_size": 512, 00:28:16.436 "num_blocks": 65536, 00:28:16.436 "uuid": "77611ce1-0b13-46e1-ad2a-878d0eb02524", 00:28:16.436 "assigned_rate_limits": { 00:28:16.436 "rw_ios_per_sec": 0, 00:28:16.436 "rw_mbytes_per_sec": 0, 00:28:16.436 "r_mbytes_per_sec": 0, 00:28:16.436 "w_mbytes_per_sec": 0 
00:28:16.436 }, 00:28:16.436 "claimed": false, 00:28:16.436 "zoned": false, 00:28:16.436 "supported_io_types": { 00:28:16.436 "read": true, 00:28:16.436 "write": true, 00:28:16.436 "unmap": true, 00:28:16.436 "flush": true, 00:28:16.436 "reset": true, 00:28:16.436 "nvme_admin": false, 00:28:16.436 "nvme_io": false, 00:28:16.436 "nvme_io_md": false, 00:28:16.436 "write_zeroes": true, 00:28:16.436 "zcopy": true, 00:28:16.436 "get_zone_info": false, 00:28:16.436 "zone_management": false, 00:28:16.436 "zone_append": false, 00:28:16.436 "compare": false, 00:28:16.436 "compare_and_write": false, 00:28:16.436 "abort": true, 00:28:16.436 "seek_hole": false, 00:28:16.436 "seek_data": false, 00:28:16.436 "copy": true, 00:28:16.436 "nvme_iov_md": false 00:28:16.436 }, 00:28:16.436 "memory_domains": [ 00:28:16.436 { 00:28:16.436 "dma_device_id": "system", 00:28:16.436 "dma_device_type": 1 00:28:16.436 }, 00:28:16.436 { 00:28:16.436 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:16.436 "dma_device_type": 2 00:28:16.436 } 00:28:16.436 ], 00:28:16.436 "driver_specific": {} 00:28:16.437 } 00:28:16.437 ] 00:28:16.437 18:26:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:16.437 18:26:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:28:16.437 18:26:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:28:16.437 18:26:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:28:16.437 18:26:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:28:16.437 18:26:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:16.437 18:26:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:16.437 BaseBdev3 00:28:16.437 18:26:47 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:16.437 18:26:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:28:16.437 18:26:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:28:16.437 18:26:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:28:16.437 18:26:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:28:16.437 18:26:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:28:16.437 18:26:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:28:16.437 18:26:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:28:16.437 18:26:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:16.437 18:26:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:16.437 18:26:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:16.437 18:26:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:28:16.437 18:26:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:16.437 18:26:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:16.437 [ 00:28:16.437 { 00:28:16.437 "name": "BaseBdev3", 00:28:16.437 "aliases": [ 00:28:16.437 "ed6d28af-bb4a-455a-9eb6-c07c88f714c9" 00:28:16.437 ], 00:28:16.437 "product_name": "Malloc disk", 00:28:16.437 "block_size": 512, 00:28:16.437 "num_blocks": 65536, 00:28:16.437 "uuid": "ed6d28af-bb4a-455a-9eb6-c07c88f714c9", 00:28:16.437 "assigned_rate_limits": { 00:28:16.437 "rw_ios_per_sec": 0, 00:28:16.437 "rw_mbytes_per_sec": 0, 
00:28:16.437 "r_mbytes_per_sec": 0, 00:28:16.437 "w_mbytes_per_sec": 0 00:28:16.437 }, 00:28:16.437 "claimed": false, 00:28:16.437 "zoned": false, 00:28:16.437 "supported_io_types": { 00:28:16.437 "read": true, 00:28:16.437 "write": true, 00:28:16.437 "unmap": true, 00:28:16.437 "flush": true, 00:28:16.437 "reset": true, 00:28:16.437 "nvme_admin": false, 00:28:16.437 "nvme_io": false, 00:28:16.437 "nvme_io_md": false, 00:28:16.437 "write_zeroes": true, 00:28:16.437 "zcopy": true, 00:28:16.437 "get_zone_info": false, 00:28:16.437 "zone_management": false, 00:28:16.437 "zone_append": false, 00:28:16.437 "compare": false, 00:28:16.437 "compare_and_write": false, 00:28:16.437 "abort": true, 00:28:16.437 "seek_hole": false, 00:28:16.437 "seek_data": false, 00:28:16.437 "copy": true, 00:28:16.437 "nvme_iov_md": false 00:28:16.437 }, 00:28:16.437 "memory_domains": [ 00:28:16.437 { 00:28:16.437 "dma_device_id": "system", 00:28:16.437 "dma_device_type": 1 00:28:16.437 }, 00:28:16.437 { 00:28:16.437 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:16.437 "dma_device_type": 2 00:28:16.437 } 00:28:16.437 ], 00:28:16.437 "driver_specific": {} 00:28:16.437 } 00:28:16.437 ] 00:28:16.437 18:26:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:16.437 18:26:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:28:16.437 18:26:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:28:16.437 18:26:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:28:16.437 18:26:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:28:16.437 18:26:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:16.437 18:26:47 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:28:16.437 [2024-12-06 18:26:47.368185] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:28:16.437 [2024-12-06 18:26:47.368417] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:28:16.437 [2024-12-06 18:26:47.368455] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:28:16.437 [2024-12-06 18:26:47.370518] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:28:16.437 18:26:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:16.437 18:26:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:28:16.437 18:26:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:28:16.437 18:26:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:28:16.437 18:26:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:28:16.437 18:26:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:28:16.437 18:26:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:28:16.437 18:26:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:28:16.437 18:26:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:28:16.437 18:26:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:28:16.437 18:26:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:28:16.437 18:26:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:16.437 18:26:47 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:16.437 18:26:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:16.437 18:26:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:28:16.696 18:26:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:16.696 18:26:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:28:16.696 "name": "Existed_Raid", 00:28:16.696 "uuid": "314c977f-9f1f-49d9-ae29-c6cc2d3a1ad0", 00:28:16.696 "strip_size_kb": 64, 00:28:16.696 "state": "configuring", 00:28:16.696 "raid_level": "concat", 00:28:16.696 "superblock": true, 00:28:16.696 "num_base_bdevs": 3, 00:28:16.696 "num_base_bdevs_discovered": 2, 00:28:16.696 "num_base_bdevs_operational": 3, 00:28:16.696 "base_bdevs_list": [ 00:28:16.696 { 00:28:16.696 "name": "BaseBdev1", 00:28:16.696 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:16.696 "is_configured": false, 00:28:16.696 "data_offset": 0, 00:28:16.696 "data_size": 0 00:28:16.696 }, 00:28:16.696 { 00:28:16.696 "name": "BaseBdev2", 00:28:16.696 "uuid": "77611ce1-0b13-46e1-ad2a-878d0eb02524", 00:28:16.696 "is_configured": true, 00:28:16.696 "data_offset": 2048, 00:28:16.696 "data_size": 63488 00:28:16.696 }, 00:28:16.696 { 00:28:16.696 "name": "BaseBdev3", 00:28:16.696 "uuid": "ed6d28af-bb4a-455a-9eb6-c07c88f714c9", 00:28:16.696 "is_configured": true, 00:28:16.696 "data_offset": 2048, 00:28:16.696 "data_size": 63488 00:28:16.696 } 00:28:16.696 ] 00:28:16.696 }' 00:28:16.696 18:26:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:28:16.696 18:26:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:16.955 18:26:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 
00:28:16.955 18:26:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:16.955 18:26:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:16.955 [2024-12-06 18:26:47.819555] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:28:16.955 18:26:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:16.955 18:26:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:28:16.955 18:26:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:28:16.955 18:26:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:28:16.955 18:26:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:28:16.955 18:26:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:28:16.955 18:26:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:28:16.955 18:26:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:28:16.955 18:26:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:28:16.955 18:26:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:28:16.955 18:26:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:28:16.955 18:26:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:28:16.955 18:26:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:16.955 18:26:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:28:16.955 18:26:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:16.955 18:26:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:16.955 18:26:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:28:16.955 "name": "Existed_Raid", 00:28:16.955 "uuid": "314c977f-9f1f-49d9-ae29-c6cc2d3a1ad0", 00:28:16.955 "strip_size_kb": 64, 00:28:16.955 "state": "configuring", 00:28:16.955 "raid_level": "concat", 00:28:16.955 "superblock": true, 00:28:16.955 "num_base_bdevs": 3, 00:28:16.955 "num_base_bdevs_discovered": 1, 00:28:16.955 "num_base_bdevs_operational": 3, 00:28:16.955 "base_bdevs_list": [ 00:28:16.955 { 00:28:16.955 "name": "BaseBdev1", 00:28:16.955 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:16.955 "is_configured": false, 00:28:16.955 "data_offset": 0, 00:28:16.955 "data_size": 0 00:28:16.955 }, 00:28:16.955 { 00:28:16.955 "name": null, 00:28:16.955 "uuid": "77611ce1-0b13-46e1-ad2a-878d0eb02524", 00:28:16.955 "is_configured": false, 00:28:16.955 "data_offset": 0, 00:28:16.955 "data_size": 63488 00:28:16.955 }, 00:28:16.955 { 00:28:16.955 "name": "BaseBdev3", 00:28:16.955 "uuid": "ed6d28af-bb4a-455a-9eb6-c07c88f714c9", 00:28:16.955 "is_configured": true, 00:28:16.955 "data_offset": 2048, 00:28:16.955 "data_size": 63488 00:28:16.955 } 00:28:16.955 ] 00:28:16.955 }' 00:28:16.955 18:26:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:28:16.955 18:26:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:17.524 18:26:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:28:17.524 18:26:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:17.524 18:26:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:28:17.524 18:26:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:17.524 18:26:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:17.524 18:26:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:28:17.524 18:26:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:28:17.524 18:26:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:17.524 18:26:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:17.524 [2024-12-06 18:26:48.305789] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:28:17.524 BaseBdev1 00:28:17.524 18:26:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:17.524 18:26:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:28:17.524 18:26:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:28:17.524 18:26:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:28:17.524 18:26:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:28:17.524 18:26:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:28:17.524 18:26:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:28:17.524 18:26:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:28:17.524 18:26:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:17.524 18:26:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:17.524 18:26:48 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:17.525 18:26:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:28:17.525 18:26:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:17.525 18:26:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:17.525 [ 00:28:17.525 { 00:28:17.525 "name": "BaseBdev1", 00:28:17.525 "aliases": [ 00:28:17.525 "15493192-51c3-45d1-ac83-fc6167f022d2" 00:28:17.525 ], 00:28:17.525 "product_name": "Malloc disk", 00:28:17.525 "block_size": 512, 00:28:17.525 "num_blocks": 65536, 00:28:17.525 "uuid": "15493192-51c3-45d1-ac83-fc6167f022d2", 00:28:17.525 "assigned_rate_limits": { 00:28:17.525 "rw_ios_per_sec": 0, 00:28:17.525 "rw_mbytes_per_sec": 0, 00:28:17.525 "r_mbytes_per_sec": 0, 00:28:17.525 "w_mbytes_per_sec": 0 00:28:17.525 }, 00:28:17.525 "claimed": true, 00:28:17.525 "claim_type": "exclusive_write", 00:28:17.525 "zoned": false, 00:28:17.525 "supported_io_types": { 00:28:17.525 "read": true, 00:28:17.525 "write": true, 00:28:17.525 "unmap": true, 00:28:17.525 "flush": true, 00:28:17.525 "reset": true, 00:28:17.525 "nvme_admin": false, 00:28:17.525 "nvme_io": false, 00:28:17.525 "nvme_io_md": false, 00:28:17.525 "write_zeroes": true, 00:28:17.525 "zcopy": true, 00:28:17.525 "get_zone_info": false, 00:28:17.525 "zone_management": false, 00:28:17.525 "zone_append": false, 00:28:17.525 "compare": false, 00:28:17.525 "compare_and_write": false, 00:28:17.525 "abort": true, 00:28:17.525 "seek_hole": false, 00:28:17.525 "seek_data": false, 00:28:17.525 "copy": true, 00:28:17.525 "nvme_iov_md": false 00:28:17.525 }, 00:28:17.525 "memory_domains": [ 00:28:17.525 { 00:28:17.525 "dma_device_id": "system", 00:28:17.525 "dma_device_type": 1 00:28:17.525 }, 00:28:17.525 { 00:28:17.525 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:17.525 
"dma_device_type": 2 00:28:17.525 } 00:28:17.525 ], 00:28:17.525 "driver_specific": {} 00:28:17.525 } 00:28:17.525 ] 00:28:17.525 18:26:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:17.525 18:26:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:28:17.525 18:26:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:28:17.525 18:26:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:28:17.525 18:26:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:28:17.525 18:26:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:28:17.525 18:26:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:28:17.525 18:26:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:28:17.525 18:26:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:28:17.525 18:26:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:28:17.525 18:26:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:28:17.525 18:26:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:28:17.525 18:26:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:17.525 18:26:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:17.525 18:26:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:17.525 18:26:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 
00:28:17.525 18:26:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:17.525 18:26:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:28:17.525 "name": "Existed_Raid", 00:28:17.525 "uuid": "314c977f-9f1f-49d9-ae29-c6cc2d3a1ad0", 00:28:17.525 "strip_size_kb": 64, 00:28:17.525 "state": "configuring", 00:28:17.525 "raid_level": "concat", 00:28:17.525 "superblock": true, 00:28:17.525 "num_base_bdevs": 3, 00:28:17.525 "num_base_bdevs_discovered": 2, 00:28:17.525 "num_base_bdevs_operational": 3, 00:28:17.525 "base_bdevs_list": [ 00:28:17.525 { 00:28:17.525 "name": "BaseBdev1", 00:28:17.525 "uuid": "15493192-51c3-45d1-ac83-fc6167f022d2", 00:28:17.525 "is_configured": true, 00:28:17.525 "data_offset": 2048, 00:28:17.525 "data_size": 63488 00:28:17.525 }, 00:28:17.525 { 00:28:17.525 "name": null, 00:28:17.525 "uuid": "77611ce1-0b13-46e1-ad2a-878d0eb02524", 00:28:17.525 "is_configured": false, 00:28:17.525 "data_offset": 0, 00:28:17.525 "data_size": 63488 00:28:17.525 }, 00:28:17.525 { 00:28:17.525 "name": "BaseBdev3", 00:28:17.525 "uuid": "ed6d28af-bb4a-455a-9eb6-c07c88f714c9", 00:28:17.525 "is_configured": true, 00:28:17.525 "data_offset": 2048, 00:28:17.525 "data_size": 63488 00:28:17.525 } 00:28:17.525 ] 00:28:17.525 }' 00:28:17.525 18:26:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:28:17.525 18:26:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:18.093 18:26:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:18.093 18:26:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:18.093 18:26:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:18.093 18:26:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq 
'.[0].base_bdevs_list[0].is_configured' 00:28:18.093 18:26:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:18.093 18:26:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:28:18.093 18:26:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:28:18.093 18:26:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:18.093 18:26:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:18.093 [2024-12-06 18:26:48.813825] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:28:18.093 18:26:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:18.093 18:26:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:28:18.093 18:26:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:28:18.093 18:26:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:28:18.093 18:26:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:28:18.093 18:26:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:28:18.093 18:26:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:28:18.093 18:26:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:28:18.093 18:26:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:28:18.093 18:26:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:28:18.093 18:26:48 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:28:18.093 18:26:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:28:18.093 18:26:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:18.093 18:26:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:18.093 18:26:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:18.093 18:26:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:18.093 18:26:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:28:18.093 "name": "Existed_Raid", 00:28:18.093 "uuid": "314c977f-9f1f-49d9-ae29-c6cc2d3a1ad0", 00:28:18.093 "strip_size_kb": 64, 00:28:18.093 "state": "configuring", 00:28:18.093 "raid_level": "concat", 00:28:18.093 "superblock": true, 00:28:18.093 "num_base_bdevs": 3, 00:28:18.093 "num_base_bdevs_discovered": 1, 00:28:18.093 "num_base_bdevs_operational": 3, 00:28:18.093 "base_bdevs_list": [ 00:28:18.093 { 00:28:18.093 "name": "BaseBdev1", 00:28:18.093 "uuid": "15493192-51c3-45d1-ac83-fc6167f022d2", 00:28:18.093 "is_configured": true, 00:28:18.093 "data_offset": 2048, 00:28:18.093 "data_size": 63488 00:28:18.093 }, 00:28:18.093 { 00:28:18.093 "name": null, 00:28:18.093 "uuid": "77611ce1-0b13-46e1-ad2a-878d0eb02524", 00:28:18.093 "is_configured": false, 00:28:18.093 "data_offset": 0, 00:28:18.093 "data_size": 63488 00:28:18.093 }, 00:28:18.093 { 00:28:18.093 "name": null, 00:28:18.093 "uuid": "ed6d28af-bb4a-455a-9eb6-c07c88f714c9", 00:28:18.093 "is_configured": false, 00:28:18.093 "data_offset": 0, 00:28:18.093 "data_size": 63488 00:28:18.093 } 00:28:18.093 ] 00:28:18.093 }' 00:28:18.093 18:26:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:28:18.093 18:26:48 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:28:18.353 18:26:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:18.353 18:26:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:18.353 18:26:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:18.353 18:26:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:28:18.353 18:26:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:18.353 18:26:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:28:18.353 18:26:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:28:18.353 18:26:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:18.353 18:26:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:18.353 [2024-12-06 18:26:49.289818] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:28:18.353 18:26:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:18.353 18:26:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:28:18.353 18:26:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:28:18.353 18:26:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:28:18.353 18:26:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:28:18.353 18:26:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:28:18.353 18:26:49 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:28:18.353 18:26:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:28:18.353 18:26:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:28:18.353 18:26:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:28:18.353 18:26:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:28:18.353 18:26:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:18.615 18:26:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:18.615 18:26:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:28:18.615 18:26:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:18.615 18:26:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:18.615 18:26:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:28:18.615 "name": "Existed_Raid", 00:28:18.615 "uuid": "314c977f-9f1f-49d9-ae29-c6cc2d3a1ad0", 00:28:18.615 "strip_size_kb": 64, 00:28:18.616 "state": "configuring", 00:28:18.616 "raid_level": "concat", 00:28:18.616 "superblock": true, 00:28:18.616 "num_base_bdevs": 3, 00:28:18.616 "num_base_bdevs_discovered": 2, 00:28:18.616 "num_base_bdevs_operational": 3, 00:28:18.616 "base_bdevs_list": [ 00:28:18.616 { 00:28:18.616 "name": "BaseBdev1", 00:28:18.616 "uuid": "15493192-51c3-45d1-ac83-fc6167f022d2", 00:28:18.616 "is_configured": true, 00:28:18.616 "data_offset": 2048, 00:28:18.616 "data_size": 63488 00:28:18.616 }, 00:28:18.616 { 00:28:18.616 "name": null, 00:28:18.616 "uuid": "77611ce1-0b13-46e1-ad2a-878d0eb02524", 00:28:18.616 "is_configured": 
false, 00:28:18.616 "data_offset": 0, 00:28:18.616 "data_size": 63488 00:28:18.616 }, 00:28:18.616 { 00:28:18.616 "name": "BaseBdev3", 00:28:18.616 "uuid": "ed6d28af-bb4a-455a-9eb6-c07c88f714c9", 00:28:18.616 "is_configured": true, 00:28:18.616 "data_offset": 2048, 00:28:18.616 "data_size": 63488 00:28:18.616 } 00:28:18.616 ] 00:28:18.616 }' 00:28:18.616 18:26:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:28:18.616 18:26:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:18.873 18:26:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:18.873 18:26:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:18.873 18:26:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:28:18.873 18:26:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:18.873 18:26:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:18.873 18:26:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:28:18.873 18:26:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:28:18.873 18:26:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:18.873 18:26:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:18.873 [2024-12-06 18:26:49.797857] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:28:19.132 18:26:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:19.132 18:26:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:28:19.132 18:26:49 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:28:19.132 18:26:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:28:19.132 18:26:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:28:19.132 18:26:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:28:19.132 18:26:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:28:19.132 18:26:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:28:19.132 18:26:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:28:19.132 18:26:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:28:19.132 18:26:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:28:19.132 18:26:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:19.132 18:26:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:28:19.132 18:26:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:19.132 18:26:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:19.132 18:26:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:19.132 18:26:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:28:19.132 "name": "Existed_Raid", 00:28:19.132 "uuid": "314c977f-9f1f-49d9-ae29-c6cc2d3a1ad0", 00:28:19.132 "strip_size_kb": 64, 00:28:19.132 "state": "configuring", 00:28:19.132 "raid_level": "concat", 00:28:19.132 "superblock": true, 00:28:19.132 "num_base_bdevs": 3, 00:28:19.132 
"num_base_bdevs_discovered": 1, 00:28:19.132 "num_base_bdevs_operational": 3, 00:28:19.132 "base_bdevs_list": [ 00:28:19.132 { 00:28:19.132 "name": null, 00:28:19.132 "uuid": "15493192-51c3-45d1-ac83-fc6167f022d2", 00:28:19.132 "is_configured": false, 00:28:19.132 "data_offset": 0, 00:28:19.132 "data_size": 63488 00:28:19.132 }, 00:28:19.132 { 00:28:19.132 "name": null, 00:28:19.132 "uuid": "77611ce1-0b13-46e1-ad2a-878d0eb02524", 00:28:19.132 "is_configured": false, 00:28:19.132 "data_offset": 0, 00:28:19.132 "data_size": 63488 00:28:19.132 }, 00:28:19.132 { 00:28:19.132 "name": "BaseBdev3", 00:28:19.132 "uuid": "ed6d28af-bb4a-455a-9eb6-c07c88f714c9", 00:28:19.132 "is_configured": true, 00:28:19.132 "data_offset": 2048, 00:28:19.132 "data_size": 63488 00:28:19.132 } 00:28:19.132 ] 00:28:19.132 }' 00:28:19.132 18:26:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:28:19.132 18:26:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:19.401 18:26:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:19.401 18:26:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:19.401 18:26:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:19.401 18:26:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:28:19.401 18:26:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:19.671 18:26:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:28:19.671 18:26:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:28:19.671 18:26:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:19.671 18:26:50 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:19.671 [2024-12-06 18:26:50.367849] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:28:19.671 18:26:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:19.671 18:26:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:28:19.671 18:26:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:28:19.671 18:26:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:28:19.671 18:26:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:28:19.671 18:26:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:28:19.671 18:26:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:28:19.671 18:26:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:28:19.671 18:26:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:28:19.671 18:26:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:28:19.671 18:26:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:28:19.671 18:26:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:19.671 18:26:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:19.671 18:26:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:19.671 18:26:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:28:19.671 
18:26:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:19.671 18:26:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:28:19.671 "name": "Existed_Raid", 00:28:19.671 "uuid": "314c977f-9f1f-49d9-ae29-c6cc2d3a1ad0", 00:28:19.671 "strip_size_kb": 64, 00:28:19.671 "state": "configuring", 00:28:19.671 "raid_level": "concat", 00:28:19.671 "superblock": true, 00:28:19.671 "num_base_bdevs": 3, 00:28:19.671 "num_base_bdevs_discovered": 2, 00:28:19.671 "num_base_bdevs_operational": 3, 00:28:19.671 "base_bdevs_list": [ 00:28:19.671 { 00:28:19.671 "name": null, 00:28:19.671 "uuid": "15493192-51c3-45d1-ac83-fc6167f022d2", 00:28:19.671 "is_configured": false, 00:28:19.671 "data_offset": 0, 00:28:19.671 "data_size": 63488 00:28:19.671 }, 00:28:19.671 { 00:28:19.671 "name": "BaseBdev2", 00:28:19.671 "uuid": "77611ce1-0b13-46e1-ad2a-878d0eb02524", 00:28:19.671 "is_configured": true, 00:28:19.671 "data_offset": 2048, 00:28:19.671 "data_size": 63488 00:28:19.671 }, 00:28:19.671 { 00:28:19.671 "name": "BaseBdev3", 00:28:19.671 "uuid": "ed6d28af-bb4a-455a-9eb6-c07c88f714c9", 00:28:19.671 "is_configured": true, 00:28:19.671 "data_offset": 2048, 00:28:19.671 "data_size": 63488 00:28:19.671 } 00:28:19.671 ] 00:28:19.671 }' 00:28:19.671 18:26:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:28:19.671 18:26:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:19.930 18:26:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:28:19.930 18:26:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:19.930 18:26:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:19.930 18:26:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
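The trace above shows `verify_raid_bdev_state` pulling the `Existed_Raid` entry out of `bdev_raid_get_bdevs all` with jq and checking its fields. A minimal stand-alone sketch of that check pattern follows; the JSON is inlined (in the real run it comes from the RPC call) and a crude `sed` extractor stands in for jq so the sketch runs without an SPDK target:

```shell
# Inlined stand-in for: rpc_cmd bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid")'
raid_bdev_info='{"name": "Existed_Raid", "state": "configuring", "raid_level": "concat", "strip_size_kb": 64}'

get_field() {
  # crude single-line JSON string-field extractor (stands in for jq here)
  echo "$raid_bdev_info" | sed -n "s/.*\"$1\": \"\([^\"]*\)\".*/\1/p"
}

state=$(get_field state)
raid_level=$(get_field raid_level)

# the suite compares with escaped pattern matches, e.g. [[ $state == \c\o\n\f\i\g\u\r\i\n\g ]]
[[ $state == configuring && $raid_level == concat ]] && echo "state OK"
```

In the actual suite the expected values (`configuring`, `concat`, strip size 64, 3 operational bdevs) are passed as arguments to `verify_raid_bdev_state`, as seen in the `local expected_state=` lines above.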
00:28:19.930 18:26:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:19.930 18:26:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:28:19.930 18:26:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:19.930 18:26:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:19.930 18:26:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:28:19.930 18:26:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:19.930 18:26:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:19.930 18:26:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 15493192-51c3-45d1-ac83-fc6167f022d2 00:28:19.930 18:26:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:19.930 18:26:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:20.190 [2024-12-06 18:26:50.892597] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:28:20.190 [2024-12-06 18:26:50.892818] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:28:20.190 [2024-12-06 18:26:50.892838] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:28:20.190 [2024-12-06 18:26:50.893099] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:28:20.190 [2024-12-06 18:26:50.893267] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:28:20.190 [2024-12-06 18:26:50.893279] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 
00:28:20.190 [2024-12-06 18:26:50.893418] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:20.190 NewBaseBdev 00:28:20.190 18:26:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:20.190 18:26:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:28:20.190 18:26:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:28:20.190 18:26:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:28:20.190 18:26:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:28:20.190 18:26:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:28:20.190 18:26:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:28:20.190 18:26:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:28:20.190 18:26:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:20.190 18:26:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:20.190 18:26:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:20.190 18:26:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:28:20.190 18:26:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:20.190 18:26:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:20.190 [ 00:28:20.190 { 00:28:20.190 "name": "NewBaseBdev", 00:28:20.190 "aliases": [ 00:28:20.190 "15493192-51c3-45d1-ac83-fc6167f022d2" 00:28:20.190 ], 00:28:20.190 "product_name": "Malloc disk", 00:28:20.190 "block_size": 512, 
00:28:20.190 "num_blocks": 65536, 00:28:20.190 "uuid": "15493192-51c3-45d1-ac83-fc6167f022d2", 00:28:20.190 "assigned_rate_limits": { 00:28:20.190 "rw_ios_per_sec": 0, 00:28:20.190 "rw_mbytes_per_sec": 0, 00:28:20.190 "r_mbytes_per_sec": 0, 00:28:20.190 "w_mbytes_per_sec": 0 00:28:20.190 }, 00:28:20.190 "claimed": true, 00:28:20.191 "claim_type": "exclusive_write", 00:28:20.191 "zoned": false, 00:28:20.191 "supported_io_types": { 00:28:20.191 "read": true, 00:28:20.191 "write": true, 00:28:20.191 "unmap": true, 00:28:20.191 "flush": true, 00:28:20.191 "reset": true, 00:28:20.191 "nvme_admin": false, 00:28:20.191 "nvme_io": false, 00:28:20.191 "nvme_io_md": false, 00:28:20.191 "write_zeroes": true, 00:28:20.191 "zcopy": true, 00:28:20.191 "get_zone_info": false, 00:28:20.191 "zone_management": false, 00:28:20.191 "zone_append": false, 00:28:20.191 "compare": false, 00:28:20.191 "compare_and_write": false, 00:28:20.191 "abort": true, 00:28:20.191 "seek_hole": false, 00:28:20.191 "seek_data": false, 00:28:20.191 "copy": true, 00:28:20.191 "nvme_iov_md": false 00:28:20.191 }, 00:28:20.191 "memory_domains": [ 00:28:20.191 { 00:28:20.191 "dma_device_id": "system", 00:28:20.191 "dma_device_type": 1 00:28:20.191 }, 00:28:20.191 { 00:28:20.191 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:20.191 "dma_device_type": 2 00:28:20.191 } 00:28:20.191 ], 00:28:20.191 "driver_specific": {} 00:28:20.191 } 00:28:20.191 ] 00:28:20.191 18:26:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:20.191 18:26:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:28:20.191 18:26:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:28:20.191 18:26:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:28:20.191 18:26:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 
-- # local expected_state=online 00:28:20.191 18:26:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:28:20.191 18:26:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:28:20.191 18:26:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:28:20.191 18:26:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:28:20.191 18:26:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:28:20.191 18:26:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:28:20.191 18:26:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:28:20.191 18:26:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:20.191 18:26:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:28:20.191 18:26:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:20.191 18:26:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:20.191 18:26:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:20.191 18:26:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:28:20.191 "name": "Existed_Raid", 00:28:20.191 "uuid": "314c977f-9f1f-49d9-ae29-c6cc2d3a1ad0", 00:28:20.191 "strip_size_kb": 64, 00:28:20.191 "state": "online", 00:28:20.191 "raid_level": "concat", 00:28:20.191 "superblock": true, 00:28:20.191 "num_base_bdevs": 3, 00:28:20.191 "num_base_bdevs_discovered": 3, 00:28:20.191 "num_base_bdevs_operational": 3, 00:28:20.191 "base_bdevs_list": [ 00:28:20.191 { 00:28:20.191 "name": "NewBaseBdev", 00:28:20.191 "uuid": 
"15493192-51c3-45d1-ac83-fc6167f022d2", 00:28:20.191 "is_configured": true, 00:28:20.191 "data_offset": 2048, 00:28:20.191 "data_size": 63488 00:28:20.191 }, 00:28:20.191 { 00:28:20.191 "name": "BaseBdev2", 00:28:20.191 "uuid": "77611ce1-0b13-46e1-ad2a-878d0eb02524", 00:28:20.191 "is_configured": true, 00:28:20.191 "data_offset": 2048, 00:28:20.191 "data_size": 63488 00:28:20.191 }, 00:28:20.191 { 00:28:20.191 "name": "BaseBdev3", 00:28:20.191 "uuid": "ed6d28af-bb4a-455a-9eb6-c07c88f714c9", 00:28:20.191 "is_configured": true, 00:28:20.191 "data_offset": 2048, 00:28:20.191 "data_size": 63488 00:28:20.191 } 00:28:20.191 ] 00:28:20.191 }' 00:28:20.191 18:26:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:28:20.191 18:26:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:20.450 18:26:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:28:20.450 18:26:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:28:20.450 18:26:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:28:20.450 18:26:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:28:20.450 18:26:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:28:20.450 18:26:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:28:20.450 18:26:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:28:20.450 18:26:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:28:20.450 18:26:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:20.450 18:26:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:28:20.450 [2024-12-06 18:26:51.388304] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:28:20.709 18:26:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:20.709 18:26:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:28:20.709 "name": "Existed_Raid", 00:28:20.709 "aliases": [ 00:28:20.709 "314c977f-9f1f-49d9-ae29-c6cc2d3a1ad0" 00:28:20.709 ], 00:28:20.709 "product_name": "Raid Volume", 00:28:20.709 "block_size": 512, 00:28:20.709 "num_blocks": 190464, 00:28:20.709 "uuid": "314c977f-9f1f-49d9-ae29-c6cc2d3a1ad0", 00:28:20.709 "assigned_rate_limits": { 00:28:20.709 "rw_ios_per_sec": 0, 00:28:20.709 "rw_mbytes_per_sec": 0, 00:28:20.709 "r_mbytes_per_sec": 0, 00:28:20.709 "w_mbytes_per_sec": 0 00:28:20.709 }, 00:28:20.709 "claimed": false, 00:28:20.709 "zoned": false, 00:28:20.709 "supported_io_types": { 00:28:20.709 "read": true, 00:28:20.709 "write": true, 00:28:20.709 "unmap": true, 00:28:20.709 "flush": true, 00:28:20.709 "reset": true, 00:28:20.709 "nvme_admin": false, 00:28:20.709 "nvme_io": false, 00:28:20.709 "nvme_io_md": false, 00:28:20.709 "write_zeroes": true, 00:28:20.709 "zcopy": false, 00:28:20.709 "get_zone_info": false, 00:28:20.709 "zone_management": false, 00:28:20.709 "zone_append": false, 00:28:20.709 "compare": false, 00:28:20.709 "compare_and_write": false, 00:28:20.709 "abort": false, 00:28:20.709 "seek_hole": false, 00:28:20.709 "seek_data": false, 00:28:20.709 "copy": false, 00:28:20.709 "nvme_iov_md": false 00:28:20.709 }, 00:28:20.709 "memory_domains": [ 00:28:20.709 { 00:28:20.709 "dma_device_id": "system", 00:28:20.709 "dma_device_type": 1 00:28:20.709 }, 00:28:20.709 { 00:28:20.709 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:20.709 "dma_device_type": 2 00:28:20.709 }, 00:28:20.709 { 00:28:20.710 "dma_device_id": "system", 00:28:20.710 "dma_device_type": 1 00:28:20.710 }, 00:28:20.710 { 00:28:20.710 
"dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:20.710 "dma_device_type": 2 00:28:20.710 }, 00:28:20.710 { 00:28:20.710 "dma_device_id": "system", 00:28:20.710 "dma_device_type": 1 00:28:20.710 }, 00:28:20.710 { 00:28:20.710 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:20.710 "dma_device_type": 2 00:28:20.710 } 00:28:20.710 ], 00:28:20.710 "driver_specific": { 00:28:20.710 "raid": { 00:28:20.710 "uuid": "314c977f-9f1f-49d9-ae29-c6cc2d3a1ad0", 00:28:20.710 "strip_size_kb": 64, 00:28:20.710 "state": "online", 00:28:20.710 "raid_level": "concat", 00:28:20.710 "superblock": true, 00:28:20.710 "num_base_bdevs": 3, 00:28:20.710 "num_base_bdevs_discovered": 3, 00:28:20.710 "num_base_bdevs_operational": 3, 00:28:20.710 "base_bdevs_list": [ 00:28:20.710 { 00:28:20.710 "name": "NewBaseBdev", 00:28:20.710 "uuid": "15493192-51c3-45d1-ac83-fc6167f022d2", 00:28:20.710 "is_configured": true, 00:28:20.710 "data_offset": 2048, 00:28:20.710 "data_size": 63488 00:28:20.710 }, 00:28:20.710 { 00:28:20.710 "name": "BaseBdev2", 00:28:20.710 "uuid": "77611ce1-0b13-46e1-ad2a-878d0eb02524", 00:28:20.710 "is_configured": true, 00:28:20.710 "data_offset": 2048, 00:28:20.710 "data_size": 63488 00:28:20.710 }, 00:28:20.710 { 00:28:20.710 "name": "BaseBdev3", 00:28:20.710 "uuid": "ed6d28af-bb4a-455a-9eb6-c07c88f714c9", 00:28:20.710 "is_configured": true, 00:28:20.710 "data_offset": 2048, 00:28:20.710 "data_size": 63488 00:28:20.710 } 00:28:20.710 ] 00:28:20.710 } 00:28:20.710 } 00:28:20.710 }' 00:28:20.710 18:26:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:28:20.710 18:26:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:28:20.710 BaseBdev2 00:28:20.710 BaseBdev3' 00:28:20.710 18:26:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 
00:28:20.710 18:26:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:28:20.710 18:26:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:28:20.710 18:26:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:28:20.710 18:26:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:28:20.710 18:26:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:20.710 18:26:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:20.710 18:26:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:20.710 18:26:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:28:20.710 18:26:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:28:20.710 18:26:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:28:20.710 18:26:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:28:20.710 18:26:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:28:20.710 18:26:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:20.710 18:26:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:20.710 18:26:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:20.710 18:26:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:28:20.710 18:26:51 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:28:20.710 18:26:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:28:20.710 18:26:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:28:20.710 18:26:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:20.710 18:26:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:20.710 18:26:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:28:20.710 18:26:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:20.710 18:26:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:28:20.710 18:26:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:28:20.710 18:26:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:28:20.710 18:26:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:20.710 18:26:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:20.710 [2024-12-06 18:26:51.639627] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:28:20.710 [2024-12-06 18:26:51.639661] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:28:20.710 [2024-12-06 18:26:51.639735] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:28:20.710 [2024-12-06 18:26:51.639793] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:28:20.710 [2024-12-06 18:26:51.639808] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000008200 name Existed_Raid, state offline 00:28:20.710 18:26:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:20.710 18:26:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 65974 00:28:20.710 18:26:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 65974 ']' 00:28:20.710 18:26:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 65974 00:28:20.710 18:26:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:28:20.710 18:26:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:20.710 18:26:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65974 00:28:20.990 killing process with pid 65974 00:28:20.990 18:26:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:28:20.990 18:26:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:28:20.990 18:26:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65974' 00:28:20.990 18:26:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 65974 00:28:20.990 [2024-12-06 18:26:51.689616] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:28:20.990 18:26:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 65974 00:28:21.250 [2024-12-06 18:26:51.995414] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:28:22.621 ************************************ 00:28:22.621 END TEST raid_state_function_test_sb 00:28:22.621 ************************************ 00:28:22.621 18:26:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:28:22.621 00:28:22.621 real 0m10.675s 
00:28:22.621 user 0m16.877s 00:28:22.621 sys 0m2.150s 00:28:22.621 18:26:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:22.621 18:26:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:22.621 18:26:53 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test concat 3 00:28:22.621 18:26:53 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:28:22.621 18:26:53 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:22.621 18:26:53 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:28:22.621 ************************************ 00:28:22.621 START TEST raid_superblock_test 00:28:22.621 ************************************ 00:28:22.621 18:26:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test concat 3 00:28:22.621 18:26:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat 00:28:22.621 18:26:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:28:22.621 18:26:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:28:22.621 18:26:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:28:22.621 18:26:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:28:22.621 18:26:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:28:22.621 18:26:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:28:22.621 18:26:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:28:22.621 18:26:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:28:22.621 18:26:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:28:22.621 18:26:53 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:28:22.621 18:26:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:28:22.621 18:26:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:28:22.621 18:26:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']' 00:28:22.621 18:26:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:28:22.621 18:26:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:28:22.621 18:26:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=66596 00:28:22.621 18:26:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 66596 00:28:22.621 18:26:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:28:22.621 18:26:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 66596 ']' 00:28:22.621 18:26:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:22.621 18:26:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:22.621 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:22.621 18:26:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:22.621 18:26:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:22.621 18:26:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:28:22.621 [2024-12-06 18:26:53.321120] Starting SPDK v25.01-pre git sha1 b6a18b192 / DPDK 24.03.0 initialization... 
00:28:22.621 [2024-12-06 18:26:53.321262] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66596 ] 00:28:22.621 [2024-12-06 18:26:53.503078] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:22.879 [2024-12-06 18:26:53.619808] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:23.136 [2024-12-06 18:26:53.830730] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:28:23.136 [2024-12-06 18:26:53.830807] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:28:23.393 18:26:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:23.393 18:26:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:28:23.393 18:26:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:28:23.393 18:26:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:28:23.393 18:26:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:28:23.393 18:26:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:28:23.393 18:26:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:28:23.393 18:26:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:28:23.393 18:26:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:28:23.393 18:26:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:28:23.393 18:26:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:28:23.393 
18:26:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:23.393 18:26:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:28:23.393 malloc1 00:28:23.393 18:26:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:23.393 18:26:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:28:23.393 18:26:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:23.393 18:26:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:28:23.393 [2024-12-06 18:26:54.225325] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:28:23.393 [2024-12-06 18:26:54.225399] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:23.393 [2024-12-06 18:26:54.225425] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:28:23.393 [2024-12-06 18:26:54.225453] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:23.393 [2024-12-06 18:26:54.228063] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:23.393 [2024-12-06 18:26:54.228108] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:28:23.393 pt1 00:28:23.393 18:26:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:23.393 18:26:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:28:23.394 18:26:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:28:23.394 18:26:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:28:23.394 18:26:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:28:23.394 18:26:54 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:28:23.394 18:26:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:28:23.394 18:26:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:28:23.394 18:26:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:28:23.394 18:26:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:28:23.394 18:26:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:23.394 18:26:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:28:23.394 malloc2 00:28:23.394 18:26:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:23.394 18:26:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:28:23.394 18:26:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:23.394 18:26:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:28:23.394 [2024-12-06 18:26:54.283784] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:28:23.394 [2024-12-06 18:26:54.283857] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:23.394 [2024-12-06 18:26:54.283905] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:28:23.394 [2024-12-06 18:26:54.283918] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:23.394 [2024-12-06 18:26:54.286556] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:23.394 [2024-12-06 18:26:54.286621] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:28:23.394 
pt2 00:28:23.394 18:26:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:23.394 18:26:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:28:23.394 18:26:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:28:23.394 18:26:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:28:23.394 18:26:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:28:23.394 18:26:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:28:23.394 18:26:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:28:23.394 18:26:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:28:23.394 18:26:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:28:23.394 18:26:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:28:23.394 18:26:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:23.394 18:26:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:28:23.652 malloc3 00:28:23.652 18:26:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:23.652 18:26:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:28:23.652 18:26:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:23.652 18:26:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:28:23.652 [2024-12-06 18:26:54.356710] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:28:23.652 [2024-12-06 18:26:54.356781] 
vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:23.652 [2024-12-06 18:26:54.356807] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:28:23.652 [2024-12-06 18:26:54.356820] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:23.652 [2024-12-06 18:26:54.359511] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:23.652 [2024-12-06 18:26:54.359556] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:28:23.652 pt3 00:28:23.652 18:26:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:23.652 18:26:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:28:23.652 18:26:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:28:23.652 18:26:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:28:23.652 18:26:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:23.652 18:26:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:28:23.652 [2024-12-06 18:26:54.368740] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:28:23.652 [2024-12-06 18:26:54.370884] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:28:23.652 [2024-12-06 18:26:54.370977] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:28:23.652 [2024-12-06 18:26:54.371141] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:28:23.652 [2024-12-06 18:26:54.371176] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:28:23.652 [2024-12-06 18:26:54.371468] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 
00:28:23.652 [2024-12-06 18:26:54.371628] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:28:23.652 [2024-12-06 18:26:54.371638] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:28:23.652 [2024-12-06 18:26:54.371796] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:23.652 18:26:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:23.652 18:26:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:28:23.652 18:26:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:28:23.652 18:26:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:28:23.652 18:26:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:28:23.652 18:26:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:28:23.652 18:26:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:28:23.652 18:26:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:28:23.652 18:26:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:28:23.652 18:26:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:28:23.652 18:26:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:28:23.653 18:26:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:23.653 18:26:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:23.653 18:26:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:28:23.653 18:26:54 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:23.653 18:26:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:23.653 18:26:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:28:23.653 "name": "raid_bdev1", 00:28:23.653 "uuid": "1bbaa372-7c50-4f5a-9f82-6d71cbd5a1f7", 00:28:23.653 "strip_size_kb": 64, 00:28:23.653 "state": "online", 00:28:23.653 "raid_level": "concat", 00:28:23.653 "superblock": true, 00:28:23.653 "num_base_bdevs": 3, 00:28:23.653 "num_base_bdevs_discovered": 3, 00:28:23.653 "num_base_bdevs_operational": 3, 00:28:23.653 "base_bdevs_list": [ 00:28:23.653 { 00:28:23.653 "name": "pt1", 00:28:23.653 "uuid": "00000000-0000-0000-0000-000000000001", 00:28:23.653 "is_configured": true, 00:28:23.653 "data_offset": 2048, 00:28:23.653 "data_size": 63488 00:28:23.653 }, 00:28:23.653 { 00:28:23.653 "name": "pt2", 00:28:23.653 "uuid": "00000000-0000-0000-0000-000000000002", 00:28:23.653 "is_configured": true, 00:28:23.653 "data_offset": 2048, 00:28:23.653 "data_size": 63488 00:28:23.653 }, 00:28:23.653 { 00:28:23.653 "name": "pt3", 00:28:23.653 "uuid": "00000000-0000-0000-0000-000000000003", 00:28:23.653 "is_configured": true, 00:28:23.653 "data_offset": 2048, 00:28:23.653 "data_size": 63488 00:28:23.653 } 00:28:23.653 ] 00:28:23.653 }' 00:28:23.653 18:26:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:28:23.653 18:26:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:28:23.911 18:26:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:28:23.911 18:26:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:28:23.911 18:26:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:28:23.911 18:26:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local 
base_bdev_names 00:28:23.911 18:26:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:28:23.911 18:26:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:28:23.911 18:26:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:28:23.911 18:26:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:28:23.911 18:26:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:23.911 18:26:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:28:23.911 [2024-12-06 18:26:54.828613] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:28:24.170 18:26:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:24.170 18:26:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:28:24.170 "name": "raid_bdev1", 00:28:24.170 "aliases": [ 00:28:24.170 "1bbaa372-7c50-4f5a-9f82-6d71cbd5a1f7" 00:28:24.170 ], 00:28:24.170 "product_name": "Raid Volume", 00:28:24.170 "block_size": 512, 00:28:24.170 "num_blocks": 190464, 00:28:24.170 "uuid": "1bbaa372-7c50-4f5a-9f82-6d71cbd5a1f7", 00:28:24.170 "assigned_rate_limits": { 00:28:24.170 "rw_ios_per_sec": 0, 00:28:24.170 "rw_mbytes_per_sec": 0, 00:28:24.170 "r_mbytes_per_sec": 0, 00:28:24.170 "w_mbytes_per_sec": 0 00:28:24.170 }, 00:28:24.170 "claimed": false, 00:28:24.170 "zoned": false, 00:28:24.170 "supported_io_types": { 00:28:24.170 "read": true, 00:28:24.170 "write": true, 00:28:24.170 "unmap": true, 00:28:24.170 "flush": true, 00:28:24.170 "reset": true, 00:28:24.170 "nvme_admin": false, 00:28:24.170 "nvme_io": false, 00:28:24.170 "nvme_io_md": false, 00:28:24.170 "write_zeroes": true, 00:28:24.170 "zcopy": false, 00:28:24.170 "get_zone_info": false, 00:28:24.170 "zone_management": false, 00:28:24.170 "zone_append": false, 00:28:24.170 "compare": 
false, 00:28:24.170 "compare_and_write": false, 00:28:24.170 "abort": false, 00:28:24.170 "seek_hole": false, 00:28:24.170 "seek_data": false, 00:28:24.170 "copy": false, 00:28:24.170 "nvme_iov_md": false 00:28:24.170 }, 00:28:24.170 "memory_domains": [ 00:28:24.170 { 00:28:24.170 "dma_device_id": "system", 00:28:24.170 "dma_device_type": 1 00:28:24.170 }, 00:28:24.170 { 00:28:24.170 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:24.170 "dma_device_type": 2 00:28:24.170 }, 00:28:24.170 { 00:28:24.170 "dma_device_id": "system", 00:28:24.170 "dma_device_type": 1 00:28:24.170 }, 00:28:24.170 { 00:28:24.170 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:24.170 "dma_device_type": 2 00:28:24.170 }, 00:28:24.170 { 00:28:24.170 "dma_device_id": "system", 00:28:24.170 "dma_device_type": 1 00:28:24.170 }, 00:28:24.170 { 00:28:24.170 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:24.170 "dma_device_type": 2 00:28:24.170 } 00:28:24.170 ], 00:28:24.170 "driver_specific": { 00:28:24.170 "raid": { 00:28:24.170 "uuid": "1bbaa372-7c50-4f5a-9f82-6d71cbd5a1f7", 00:28:24.170 "strip_size_kb": 64, 00:28:24.170 "state": "online", 00:28:24.170 "raid_level": "concat", 00:28:24.170 "superblock": true, 00:28:24.170 "num_base_bdevs": 3, 00:28:24.170 "num_base_bdevs_discovered": 3, 00:28:24.170 "num_base_bdevs_operational": 3, 00:28:24.170 "base_bdevs_list": [ 00:28:24.170 { 00:28:24.170 "name": "pt1", 00:28:24.170 "uuid": "00000000-0000-0000-0000-000000000001", 00:28:24.170 "is_configured": true, 00:28:24.170 "data_offset": 2048, 00:28:24.170 "data_size": 63488 00:28:24.170 }, 00:28:24.170 { 00:28:24.170 "name": "pt2", 00:28:24.170 "uuid": "00000000-0000-0000-0000-000000000002", 00:28:24.170 "is_configured": true, 00:28:24.170 "data_offset": 2048, 00:28:24.170 "data_size": 63488 00:28:24.170 }, 00:28:24.170 { 00:28:24.170 "name": "pt3", 00:28:24.170 "uuid": "00000000-0000-0000-0000-000000000003", 00:28:24.170 "is_configured": true, 00:28:24.170 "data_offset": 2048, 00:28:24.170 
"data_size": 63488 00:28:24.170 } 00:28:24.170 ] 00:28:24.170 } 00:28:24.170 } 00:28:24.170 }' 00:28:24.170 18:26:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:28:24.170 18:26:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:28:24.170 pt2 00:28:24.170 pt3' 00:28:24.170 18:26:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:28:24.171 18:26:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:28:24.171 18:26:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:28:24.171 18:26:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:28:24.171 18:26:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:28:24.171 18:26:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:24.171 18:26:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:28:24.171 18:26:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:24.171 18:26:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:28:24.171 18:26:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:28:24.171 18:26:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:28:24.171 18:26:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:28:24.171 18:26:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:28:24.171 18:26:55 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:24.171 18:26:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:28:24.171 18:26:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:24.171 18:26:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:28:24.171 18:26:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:28:24.171 18:26:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:28:24.171 18:26:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:28:24.171 18:26:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:24.171 18:26:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:28:24.171 18:26:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:28:24.171 18:26:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:24.171 18:26:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:28:24.171 18:26:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:28:24.171 18:26:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:28:24.171 18:26:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:28:24.171 18:26:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:24.171 18:26:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:28:24.428 [2024-12-06 18:26:55.124490] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:28:24.428 18:26:55 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:24.428 18:26:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=1bbaa372-7c50-4f5a-9f82-6d71cbd5a1f7 00:28:24.428 18:26:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 1bbaa372-7c50-4f5a-9f82-6d71cbd5a1f7 ']' 00:28:24.428 18:26:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:28:24.428 18:26:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:24.428 18:26:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:28:24.428 [2024-12-06 18:26:55.168162] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:28:24.428 [2024-12-06 18:26:55.168194] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:28:24.428 [2024-12-06 18:26:55.168271] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:28:24.428 [2024-12-06 18:26:55.168336] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:28:24.428 [2024-12-06 18:26:55.168348] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:28:24.428 18:26:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:24.428 18:26:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:24.428 18:26:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:24.428 18:26:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:28:24.428 18:26:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:28:24.428 18:26:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:24.428 18:26:55 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:28:24.428 18:26:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:28:24.428 18:26:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:28:24.428 18:26:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:28:24.428 18:26:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:24.428 18:26:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:28:24.428 18:26:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:24.428 18:26:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:28:24.428 18:26:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:28:24.428 18:26:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:24.428 18:26:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:28:24.428 18:26:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:24.428 18:26:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:28:24.428 18:26:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:28:24.428 18:26:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:24.428 18:26:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:28:24.428 18:26:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:24.428 18:26:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:28:24.428 18:26:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | 
select(.product_name == "passthru")] | any' 00:28:24.428 18:26:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:24.428 18:26:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:28:24.428 18:26:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:24.428 18:26:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:28:24.428 18:26:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:28:24.428 18:26:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:28:24.428 18:26:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:28:24.428 18:26:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:28:24.428 18:26:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:24.428 18:26:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:28:24.428 18:26:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:24.428 18:26:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:28:24.428 18:26:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:24.428 18:26:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:28:24.428 [2024-12-06 18:26:55.324019] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:28:24.428 [2024-12-06 18:26:55.326171] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: 
bdev malloc2 is claimed 00:28:24.428 [2024-12-06 18:26:55.326231] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:28:24.428 [2024-12-06 18:26:55.326284] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:28:24.428 [2024-12-06 18:26:55.326343] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:28:24.428 [2024-12-06 18:26:55.326364] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:28:24.428 [2024-12-06 18:26:55.326386] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:28:24.428 [2024-12-06 18:26:55.326397] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:28:24.428 request: 00:28:24.428 { 00:28:24.428 "name": "raid_bdev1", 00:28:24.428 "raid_level": "concat", 00:28:24.428 "base_bdevs": [ 00:28:24.428 "malloc1", 00:28:24.428 "malloc2", 00:28:24.428 "malloc3" 00:28:24.428 ], 00:28:24.428 "strip_size_kb": 64, 00:28:24.428 "superblock": false, 00:28:24.428 "method": "bdev_raid_create", 00:28:24.428 "req_id": 1 00:28:24.428 } 00:28:24.428 Got JSON-RPC error response 00:28:24.428 response: 00:28:24.428 { 00:28:24.428 "code": -17, 00:28:24.428 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:28:24.428 } 00:28:24.428 18:26:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:28:24.428 18:26:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:28:24.428 18:26:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:28:24.428 18:26:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:28:24.428 18:26:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # 
(( !es == 0 )) 00:28:24.428 18:26:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:24.428 18:26:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:28:24.428 18:26:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:24.428 18:26:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:28:24.428 18:26:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:24.687 18:26:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:28:24.687 18:26:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:28:24.687 18:26:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:28:24.687 18:26:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:24.687 18:26:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:28:24.687 [2024-12-06 18:26:55.391821] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:28:24.687 [2024-12-06 18:26:55.391880] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:24.687 [2024-12-06 18:26:55.391901] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:28:24.687 [2024-12-06 18:26:55.391914] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:24.687 [2024-12-06 18:26:55.394407] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:24.687 [2024-12-06 18:26:55.394452] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:28:24.687 [2024-12-06 18:26:55.394526] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:28:24.687 [2024-12-06 18:26:55.394582] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:28:24.687 pt1 00:28:24.687 18:26:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:24.687 18:26:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3 00:28:24.687 18:26:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:28:24.687 18:26:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:28:24.687 18:26:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:28:24.687 18:26:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:28:24.687 18:26:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:28:24.687 18:26:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:28:24.687 18:26:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:28:24.687 18:26:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:28:24.687 18:26:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:28:24.687 18:26:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:24.687 18:26:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:24.687 18:26:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:24.687 18:26:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:28:24.687 18:26:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:24.687 18:26:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:28:24.687 "name": "raid_bdev1", 
00:28:24.687 "uuid": "1bbaa372-7c50-4f5a-9f82-6d71cbd5a1f7", 00:28:24.687 "strip_size_kb": 64, 00:28:24.687 "state": "configuring", 00:28:24.687 "raid_level": "concat", 00:28:24.687 "superblock": true, 00:28:24.687 "num_base_bdevs": 3, 00:28:24.687 "num_base_bdevs_discovered": 1, 00:28:24.687 "num_base_bdevs_operational": 3, 00:28:24.687 "base_bdevs_list": [ 00:28:24.687 { 00:28:24.687 "name": "pt1", 00:28:24.687 "uuid": "00000000-0000-0000-0000-000000000001", 00:28:24.687 "is_configured": true, 00:28:24.687 "data_offset": 2048, 00:28:24.687 "data_size": 63488 00:28:24.687 }, 00:28:24.687 { 00:28:24.687 "name": null, 00:28:24.687 "uuid": "00000000-0000-0000-0000-000000000002", 00:28:24.687 "is_configured": false, 00:28:24.687 "data_offset": 2048, 00:28:24.687 "data_size": 63488 00:28:24.687 }, 00:28:24.687 { 00:28:24.687 "name": null, 00:28:24.687 "uuid": "00000000-0000-0000-0000-000000000003", 00:28:24.687 "is_configured": false, 00:28:24.687 "data_offset": 2048, 00:28:24.687 "data_size": 63488 00:28:24.687 } 00:28:24.687 ] 00:28:24.687 }' 00:28:24.687 18:26:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:28:24.687 18:26:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:28:24.945 18:26:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:28:24.945 18:26:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:28:24.945 18:26:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:24.945 18:26:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:28:24.945 [2024-12-06 18:26:55.859324] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:28:24.945 [2024-12-06 18:26:55.859411] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:24.945 [2024-12-06 18:26:55.859442] 
vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:28:24.945 [2024-12-06 18:26:55.859455] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:24.945 [2024-12-06 18:26:55.859905] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:24.945 [2024-12-06 18:26:55.859936] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:28:24.945 [2024-12-06 18:26:55.860026] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:28:24.945 [2024-12-06 18:26:55.860058] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:28:24.945 pt2 00:28:24.945 18:26:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:24.945 18:26:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:28:24.945 18:26:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:24.945 18:26:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:28:24.945 [2024-12-06 18:26:55.871319] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:28:24.945 18:26:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:24.945 18:26:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3 00:28:24.945 18:26:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:28:24.945 18:26:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:28:24.945 18:26:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:28:24.945 18:26:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:28:24.945 18:26:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 
-- # local num_base_bdevs_operational=3 00:28:24.945 18:26:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:28:24.945 18:26:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:28:24.945 18:26:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:28:24.945 18:26:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:28:24.945 18:26:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:24.945 18:26:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:24.945 18:26:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:24.945 18:26:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:28:25.204 18:26:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:25.204 18:26:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:28:25.204 "name": "raid_bdev1", 00:28:25.204 "uuid": "1bbaa372-7c50-4f5a-9f82-6d71cbd5a1f7", 00:28:25.204 "strip_size_kb": 64, 00:28:25.204 "state": "configuring", 00:28:25.204 "raid_level": "concat", 00:28:25.204 "superblock": true, 00:28:25.204 "num_base_bdevs": 3, 00:28:25.204 "num_base_bdevs_discovered": 1, 00:28:25.204 "num_base_bdevs_operational": 3, 00:28:25.204 "base_bdevs_list": [ 00:28:25.204 { 00:28:25.204 "name": "pt1", 00:28:25.204 "uuid": "00000000-0000-0000-0000-000000000001", 00:28:25.204 "is_configured": true, 00:28:25.204 "data_offset": 2048, 00:28:25.204 "data_size": 63488 00:28:25.204 }, 00:28:25.204 { 00:28:25.204 "name": null, 00:28:25.204 "uuid": "00000000-0000-0000-0000-000000000002", 00:28:25.204 "is_configured": false, 00:28:25.204 "data_offset": 0, 00:28:25.204 "data_size": 63488 00:28:25.204 }, 00:28:25.204 { 00:28:25.204 "name": null, 00:28:25.204 
"uuid": "00000000-0000-0000-0000-000000000003", 00:28:25.204 "is_configured": false, 00:28:25.204 "data_offset": 2048, 00:28:25.204 "data_size": 63488 00:28:25.204 } 00:28:25.204 ] 00:28:25.204 }' 00:28:25.204 18:26:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:28:25.204 18:26:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:28:25.463 18:26:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:28:25.463 18:26:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:28:25.463 18:26:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:28:25.463 18:26:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:25.463 18:26:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:28:25.463 [2024-12-06 18:26:56.327485] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:28:25.463 [2024-12-06 18:26:56.327570] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:25.463 [2024-12-06 18:26:56.327594] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:28:25.463 [2024-12-06 18:26:56.327609] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:25.463 [2024-12-06 18:26:56.328101] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:25.463 [2024-12-06 18:26:56.328138] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:28:25.463 [2024-12-06 18:26:56.328246] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:28:25.463 [2024-12-06 18:26:56.328277] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:28:25.463 pt2 00:28:25.463 18:26:56 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:25.463 18:26:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:28:25.463 18:26:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:28:25.463 18:26:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:28:25.463 18:26:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:25.463 18:26:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:28:25.463 [2024-12-06 18:26:56.335475] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:28:25.463 [2024-12-06 18:26:56.335546] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:25.463 [2024-12-06 18:26:56.335566] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:28:25.463 [2024-12-06 18:26:56.335581] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:25.463 [2024-12-06 18:26:56.336029] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:25.463 [2024-12-06 18:26:56.336073] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:28:25.463 [2024-12-06 18:26:56.336164] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:28:25.463 [2024-12-06 18:26:56.336195] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:28:25.463 [2024-12-06 18:26:56.336318] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:28:25.464 [2024-12-06 18:26:56.336332] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:28:25.464 [2024-12-06 18:26:56.336608] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 
0x60d000005ee0 00:28:25.464 [2024-12-06 18:26:56.336764] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:28:25.464 [2024-12-06 18:26:56.336778] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:28:25.464 [2024-12-06 18:26:56.336915] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:25.464 pt3 00:28:25.464 18:26:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:25.464 18:26:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:28:25.464 18:26:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:28:25.464 18:26:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:28:25.464 18:26:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:28:25.464 18:26:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:28:25.464 18:26:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:28:25.464 18:26:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:28:25.464 18:26:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:28:25.464 18:26:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:28:25.464 18:26:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:28:25.464 18:26:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:28:25.464 18:26:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:28:25.464 18:26:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:25.464 18:26:56 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:25.464 18:26:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:25.464 18:26:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:28:25.464 18:26:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:25.464 18:26:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:28:25.464 "name": "raid_bdev1", 00:28:25.464 "uuid": "1bbaa372-7c50-4f5a-9f82-6d71cbd5a1f7", 00:28:25.464 "strip_size_kb": 64, 00:28:25.464 "state": "online", 00:28:25.464 "raid_level": "concat", 00:28:25.464 "superblock": true, 00:28:25.464 "num_base_bdevs": 3, 00:28:25.464 "num_base_bdevs_discovered": 3, 00:28:25.464 "num_base_bdevs_operational": 3, 00:28:25.464 "base_bdevs_list": [ 00:28:25.464 { 00:28:25.464 "name": "pt1", 00:28:25.464 "uuid": "00000000-0000-0000-0000-000000000001", 00:28:25.464 "is_configured": true, 00:28:25.464 "data_offset": 2048, 00:28:25.464 "data_size": 63488 00:28:25.464 }, 00:28:25.464 { 00:28:25.464 "name": "pt2", 00:28:25.464 "uuid": "00000000-0000-0000-0000-000000000002", 00:28:25.464 "is_configured": true, 00:28:25.464 "data_offset": 2048, 00:28:25.464 "data_size": 63488 00:28:25.464 }, 00:28:25.464 { 00:28:25.464 "name": "pt3", 00:28:25.464 "uuid": "00000000-0000-0000-0000-000000000003", 00:28:25.464 "is_configured": true, 00:28:25.464 "data_offset": 2048, 00:28:25.464 "data_size": 63488 00:28:25.464 } 00:28:25.464 ] 00:28:25.464 }' 00:28:25.464 18:26:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:28:25.464 18:26:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:28:26.032 18:26:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:28:26.032 18:26:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # 
local raid_bdev_name=raid_bdev1 00:28:26.032 18:26:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:28:26.032 18:26:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:28:26.032 18:26:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:28:26.032 18:26:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:28:26.032 18:26:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:28:26.032 18:26:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:28:26.032 18:26:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:26.032 18:26:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:28:26.032 [2024-12-06 18:26:56.827108] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:28:26.032 18:26:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:26.032 18:26:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:28:26.032 "name": "raid_bdev1", 00:28:26.032 "aliases": [ 00:28:26.032 "1bbaa372-7c50-4f5a-9f82-6d71cbd5a1f7" 00:28:26.032 ], 00:28:26.032 "product_name": "Raid Volume", 00:28:26.032 "block_size": 512, 00:28:26.032 "num_blocks": 190464, 00:28:26.032 "uuid": "1bbaa372-7c50-4f5a-9f82-6d71cbd5a1f7", 00:28:26.032 "assigned_rate_limits": { 00:28:26.032 "rw_ios_per_sec": 0, 00:28:26.032 "rw_mbytes_per_sec": 0, 00:28:26.032 "r_mbytes_per_sec": 0, 00:28:26.032 "w_mbytes_per_sec": 0 00:28:26.032 }, 00:28:26.032 "claimed": false, 00:28:26.032 "zoned": false, 00:28:26.032 "supported_io_types": { 00:28:26.032 "read": true, 00:28:26.032 "write": true, 00:28:26.032 "unmap": true, 00:28:26.032 "flush": true, 00:28:26.032 "reset": true, 00:28:26.032 "nvme_admin": false, 00:28:26.032 "nvme_io": false, 
00:28:26.032 "nvme_io_md": false, 00:28:26.032 "write_zeroes": true, 00:28:26.032 "zcopy": false, 00:28:26.032 "get_zone_info": false, 00:28:26.032 "zone_management": false, 00:28:26.032 "zone_append": false, 00:28:26.032 "compare": false, 00:28:26.032 "compare_and_write": false, 00:28:26.032 "abort": false, 00:28:26.032 "seek_hole": false, 00:28:26.032 "seek_data": false, 00:28:26.032 "copy": false, 00:28:26.032 "nvme_iov_md": false 00:28:26.032 }, 00:28:26.032 "memory_domains": [ 00:28:26.032 { 00:28:26.032 "dma_device_id": "system", 00:28:26.032 "dma_device_type": 1 00:28:26.032 }, 00:28:26.032 { 00:28:26.032 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:26.032 "dma_device_type": 2 00:28:26.032 }, 00:28:26.032 { 00:28:26.032 "dma_device_id": "system", 00:28:26.032 "dma_device_type": 1 00:28:26.032 }, 00:28:26.032 { 00:28:26.032 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:26.032 "dma_device_type": 2 00:28:26.032 }, 00:28:26.032 { 00:28:26.032 "dma_device_id": "system", 00:28:26.032 "dma_device_type": 1 00:28:26.032 }, 00:28:26.032 { 00:28:26.032 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:26.032 "dma_device_type": 2 00:28:26.032 } 00:28:26.032 ], 00:28:26.032 "driver_specific": { 00:28:26.032 "raid": { 00:28:26.032 "uuid": "1bbaa372-7c50-4f5a-9f82-6d71cbd5a1f7", 00:28:26.032 "strip_size_kb": 64, 00:28:26.032 "state": "online", 00:28:26.032 "raid_level": "concat", 00:28:26.032 "superblock": true, 00:28:26.032 "num_base_bdevs": 3, 00:28:26.032 "num_base_bdevs_discovered": 3, 00:28:26.032 "num_base_bdevs_operational": 3, 00:28:26.032 "base_bdevs_list": [ 00:28:26.032 { 00:28:26.032 "name": "pt1", 00:28:26.032 "uuid": "00000000-0000-0000-0000-000000000001", 00:28:26.032 "is_configured": true, 00:28:26.032 "data_offset": 2048, 00:28:26.032 "data_size": 63488 00:28:26.032 }, 00:28:26.032 { 00:28:26.032 "name": "pt2", 00:28:26.032 "uuid": "00000000-0000-0000-0000-000000000002", 00:28:26.032 "is_configured": true, 00:28:26.032 "data_offset": 2048, 00:28:26.032 
"data_size": 63488 00:28:26.032 }, 00:28:26.032 { 00:28:26.032 "name": "pt3", 00:28:26.032 "uuid": "00000000-0000-0000-0000-000000000003", 00:28:26.032 "is_configured": true, 00:28:26.032 "data_offset": 2048, 00:28:26.032 "data_size": 63488 00:28:26.032 } 00:28:26.032 ] 00:28:26.032 } 00:28:26.032 } 00:28:26.032 }' 00:28:26.032 18:26:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:28:26.032 18:26:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:28:26.032 pt2 00:28:26.032 pt3' 00:28:26.032 18:26:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:28:26.032 18:26:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:28:26.032 18:26:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:28:26.032 18:26:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:28:26.032 18:26:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:26.032 18:26:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:28:26.032 18:26:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:28:26.032 18:26:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:26.293 18:26:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:28:26.293 18:26:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:28:26.293 18:26:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:28:26.293 18:26:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | 
[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:28:26.293 18:26:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:28:26.293 18:26:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:26.293 18:26:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:28:26.293 18:26:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:26.293 18:26:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:28:26.293 18:26:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:28:26.293 18:26:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:28:26.293 18:26:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:28:26.293 18:26:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:26.293 18:26:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:28:26.293 18:26:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:28:26.293 18:26:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:26.293 18:26:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:28:26.293 18:26:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:28:26.293 18:26:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:28:26.293 18:26:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:26.293 18:26:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:28:26.293 18:26:57 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:28:26.293 [2024-12-06 18:26:57.070665] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:28:26.293 18:26:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:26.293 18:26:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 1bbaa372-7c50-4f5a-9f82-6d71cbd5a1f7 '!=' 1bbaa372-7c50-4f5a-9f82-6d71cbd5a1f7 ']' 00:28:26.293 18:26:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat 00:28:26.293 18:26:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:28:26.293 18:26:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:28:26.293 18:26:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 66596 00:28:26.293 18:26:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 66596 ']' 00:28:26.293 18:26:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 66596 00:28:26.293 18:26:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:28:26.293 18:26:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:26.293 18:26:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 66596 00:28:26.293 18:26:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:28:26.293 18:26:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:28:26.293 killing process with pid 66596 00:28:26.293 18:26:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 66596' 00:28:26.293 18:26:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 66596 00:28:26.293 [2024-12-06 18:26:57.146258] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 
00:28:26.293 [2024-12-06 18:26:57.146360] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:28:26.293 18:26:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 66596 00:28:26.293 [2024-12-06 18:26:57.146421] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:28:26.293 [2024-12-06 18:26:57.146437] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:28:26.553 [2024-12-06 18:26:57.448914] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:28:27.933 18:26:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:28:27.933 00:28:27.933 real 0m5.353s 00:28:27.933 user 0m7.692s 00:28:27.933 sys 0m1.096s 00:28:27.933 18:26:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:27.933 18:26:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:28:27.933 ************************************ 00:28:27.933 END TEST raid_superblock_test 00:28:27.933 ************************************ 00:28:27.933 18:26:58 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test concat 3 read 00:28:27.933 18:26:58 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:28:27.933 18:26:58 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:27.933 18:26:58 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:28:27.933 ************************************ 00:28:27.933 START TEST raid_read_error_test 00:28:27.933 ************************************ 00:28:27.933 18:26:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 3 read 00:28:27.933 18:26:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:28:27.933 18:26:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local 
num_base_bdevs=3 00:28:27.933 18:26:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:28:27.933 18:26:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:28:27.933 18:26:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:28:27.933 18:26:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:28:27.933 18:26:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:28:27.933 18:26:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:28:27.933 18:26:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:28:27.933 18:26:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:28:27.933 18:26:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:28:27.933 18:26:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:28:27.933 18:26:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:28:27.933 18:26:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:28:27.933 18:26:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:28:27.933 18:26:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:28:27.933 18:26:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:28:27.933 18:26:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:28:27.933 18:26:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:28:27.933 18:26:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:28:27.933 18:26:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:28:27.933 18:26:58 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:28:27.933 18:26:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:28:27.933 18:26:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:28:27.933 18:26:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:28:27.933 18:26:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.LzUvbWgJtQ 00:28:27.933 18:26:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=66849 00:28:27.933 18:26:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:28:27.933 18:26:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 66849 00:28:27.933 18:26:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 66849 ']' 00:28:27.933 18:26:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:27.933 18:26:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:27.933 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:27.933 18:26:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:27.933 18:26:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:27.933 18:26:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:28:27.933 [2024-12-06 18:26:58.775027] Starting SPDK v25.01-pre git sha1 b6a18b192 / DPDK 24.03.0 initialization... 
00:28:27.933 [2024-12-06 18:26:58.775165] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66849 ] 00:28:28.191 [2024-12-06 18:26:58.956000] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:28.191 [2024-12-06 18:26:59.079341] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:28.450 [2024-12-06 18:26:59.294617] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:28:28.450 [2024-12-06 18:26:59.294687] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:28:28.727 18:26:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:28.727 18:26:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:28:28.727 18:26:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:28:28.727 18:26:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:28:28.727 18:26:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:28.727 18:26:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:28:28.727 BaseBdev1_malloc 00:28:28.727 18:26:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:28.727 18:26:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:28:28.728 18:26:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:28.728 18:26:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:28:28.728 true 00:28:28.728 18:26:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:28:28.728 18:26:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:28:28.728 18:26:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:28.728 18:26:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:28:28.728 [2024-12-06 18:26:59.666775] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:28:28.728 [2024-12-06 18:26:59.666846] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:28.728 [2024-12-06 18:26:59.666872] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:28:28.728 [2024-12-06 18:26:59.666887] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:28.728 [2024-12-06 18:26:59.669317] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:28.728 [2024-12-06 18:26:59.669363] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:28:28.728 BaseBdev1 00:28:28.728 18:26:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:28.728 18:26:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:28:28.728 18:26:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:28:28.728 18:26:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:28.728 18:26:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:28:28.987 BaseBdev2_malloc 00:28:28.987 18:26:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:28.987 18:26:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:28:28.987 18:26:59 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:28:28.987 18:26:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:28:28.987 true 00:28:28.987 18:26:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:28.987 18:26:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:28:28.987 18:26:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:28.987 18:26:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:28:28.987 [2024-12-06 18:26:59.735609] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:28:28.987 [2024-12-06 18:26:59.735677] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:28.987 [2024-12-06 18:26:59.735697] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:28:28.987 [2024-12-06 18:26:59.735711] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:28.987 [2024-12-06 18:26:59.738082] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:28.987 [2024-12-06 18:26:59.738132] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:28:28.987 BaseBdev2 00:28:28.987 18:26:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:28.987 18:26:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:28:28.987 18:26:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:28:28.987 18:26:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:28.987 18:26:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:28:28.987 BaseBdev3_malloc 00:28:28.987 18:26:59 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:28.987 18:26:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:28:28.987 18:26:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:28.987 18:26:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:28:28.987 true 00:28:28.987 18:26:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:28.987 18:26:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:28:28.987 18:26:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:28.987 18:26:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:28:28.987 [2024-12-06 18:26:59.817957] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:28:28.987 [2024-12-06 18:26:59.818025] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:28.987 [2024-12-06 18:26:59.818047] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:28:28.987 [2024-12-06 18:26:59.818062] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:28.987 [2024-12-06 18:26:59.820673] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:28.987 [2024-12-06 18:26:59.820835] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:28:28.987 BaseBdev3 00:28:28.987 18:26:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:28.987 18:26:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:28:28.987 18:26:59 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:28:28.987 18:26:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:28:28.987 [2024-12-06 18:26:59.830028] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:28:28.987 [2024-12-06 18:26:59.832125] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:28:28.987 [2024-12-06 18:26:59.832215] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:28:28.987 [2024-12-06 18:26:59.832410] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:28:28.987 [2024-12-06 18:26:59.832423] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:28:28.987 [2024-12-06 18:26:59.832693] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:28:28.987 [2024-12-06 18:26:59.832849] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:28:28.987 [2024-12-06 18:26:59.832865] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:28:28.987 [2024-12-06 18:26:59.833021] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:28.987 18:26:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:28.987 18:26:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:28:28.987 18:26:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:28:28.987 18:26:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:28:28.987 18:26:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:28:28.987 18:26:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:28:28.987 18:26:59 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:28:28.987 18:26:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:28:28.987 18:26:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:28:28.987 18:26:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:28:28.987 18:26:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:28:28.987 18:26:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:28.987 18:26:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:28.987 18:26:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:28.987 18:26:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:28:28.987 18:26:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:28.987 18:26:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:28:28.987 "name": "raid_bdev1", 00:28:28.987 "uuid": "40ed281c-509d-444a-9e00-40c9a67867b9", 00:28:28.987 "strip_size_kb": 64, 00:28:28.987 "state": "online", 00:28:28.987 "raid_level": "concat", 00:28:28.987 "superblock": true, 00:28:28.987 "num_base_bdevs": 3, 00:28:28.987 "num_base_bdevs_discovered": 3, 00:28:28.987 "num_base_bdevs_operational": 3, 00:28:28.987 "base_bdevs_list": [ 00:28:28.987 { 00:28:28.987 "name": "BaseBdev1", 00:28:28.987 "uuid": "a6823c4d-9510-5e97-b67c-6ab8710fe568", 00:28:28.987 "is_configured": true, 00:28:28.987 "data_offset": 2048, 00:28:28.987 "data_size": 63488 00:28:28.987 }, 00:28:28.987 { 00:28:28.987 "name": "BaseBdev2", 00:28:28.987 "uuid": "0cb172ae-ec60-53f7-aca0-068e6cf19b1e", 00:28:28.987 "is_configured": true, 00:28:28.987 "data_offset": 2048, 00:28:28.987 "data_size": 63488 
00:28:28.987 }, 00:28:28.987 { 00:28:28.987 "name": "BaseBdev3", 00:28:28.987 "uuid": "0dce403f-b7f8-5f79-a665-1203bc6f6d1c", 00:28:28.987 "is_configured": true, 00:28:28.987 "data_offset": 2048, 00:28:28.987 "data_size": 63488 00:28:28.987 } 00:28:28.987 ] 00:28:28.987 }' 00:28:28.987 18:26:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:28:28.987 18:26:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:28:29.554 18:27:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:28:29.554 18:27:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:28:29.554 [2024-12-06 18:27:00.387223] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:28:30.493 18:27:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:28:30.493 18:27:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:30.493 18:27:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:28:30.493 18:27:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:30.493 18:27:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:28:30.493 18:27:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:28:30.493 18:27:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:28:30.493 18:27:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:28:30.493 18:27:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:28:30.493 18:27:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 
00:28:30.493 18:27:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:28:30.493 18:27:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:28:30.493 18:27:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:28:30.493 18:27:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:28:30.493 18:27:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:28:30.493 18:27:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:28:30.493 18:27:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:28:30.493 18:27:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:30.493 18:27:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:30.493 18:27:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:30.493 18:27:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:28:30.493 18:27:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:30.493 18:27:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:28:30.493 "name": "raid_bdev1", 00:28:30.493 "uuid": "40ed281c-509d-444a-9e00-40c9a67867b9", 00:28:30.493 "strip_size_kb": 64, 00:28:30.493 "state": "online", 00:28:30.493 "raid_level": "concat", 00:28:30.493 "superblock": true, 00:28:30.493 "num_base_bdevs": 3, 00:28:30.493 "num_base_bdevs_discovered": 3, 00:28:30.493 "num_base_bdevs_operational": 3, 00:28:30.493 "base_bdevs_list": [ 00:28:30.493 { 00:28:30.493 "name": "BaseBdev1", 00:28:30.493 "uuid": "a6823c4d-9510-5e97-b67c-6ab8710fe568", 00:28:30.493 "is_configured": true, 00:28:30.493 "data_offset": 2048, 00:28:30.493 "data_size": 63488 
00:28:30.493 }, 00:28:30.493 { 00:28:30.493 "name": "BaseBdev2", 00:28:30.493 "uuid": "0cb172ae-ec60-53f7-aca0-068e6cf19b1e", 00:28:30.493 "is_configured": true, 00:28:30.493 "data_offset": 2048, 00:28:30.493 "data_size": 63488 00:28:30.493 }, 00:28:30.493 { 00:28:30.493 "name": "BaseBdev3", 00:28:30.493 "uuid": "0dce403f-b7f8-5f79-a665-1203bc6f6d1c", 00:28:30.493 "is_configured": true, 00:28:30.493 "data_offset": 2048, 00:28:30.493 "data_size": 63488 00:28:30.493 } 00:28:30.493 ] 00:28:30.493 }' 00:28:30.493 18:27:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:28:30.493 18:27:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:28:31.062 18:27:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:28:31.062 18:27:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:31.062 18:27:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:28:31.062 [2024-12-06 18:27:01.765844] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:28:31.062 [2024-12-06 18:27:01.765886] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:28:31.062 [2024-12-06 18:27:01.768536] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:28:31.062 [2024-12-06 18:27:01.768724] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:31.062 [2024-12-06 18:27:01.768781] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:28:31.062 [2024-12-06 18:27:01.768796] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:28:31.062 { 00:28:31.062 "results": [ 00:28:31.062 { 00:28:31.062 "job": "raid_bdev1", 00:28:31.062 "core_mask": "0x1", 00:28:31.062 "workload": "randrw", 00:28:31.062 "percentage": 50, 
00:28:31.062 "status": "finished", 00:28:31.062 "queue_depth": 1, 00:28:31.062 "io_size": 131072, 00:28:31.062 "runtime": 1.378655, 00:28:31.062 "iops": 15992.398388284233, 00:28:31.062 "mibps": 1999.0497985355291, 00:28:31.062 "io_failed": 1, 00:28:31.062 "io_timeout": 0, 00:28:31.062 "avg_latency_us": 86.20740377264875, 00:28:31.062 "min_latency_us": 27.142168674698794, 00:28:31.062 "max_latency_us": 1408.1028112449799 00:28:31.062 } 00:28:31.062 ], 00:28:31.062 "core_count": 1 00:28:31.062 } 00:28:31.062 18:27:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:31.062 18:27:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 66849 00:28:31.062 18:27:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 66849 ']' 00:28:31.062 18:27:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 66849 00:28:31.062 18:27:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:28:31.062 18:27:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:31.062 18:27:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 66849 00:28:31.062 killing process with pid 66849 00:28:31.062 18:27:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:28:31.062 18:27:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:28:31.062 18:27:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 66849' 00:28:31.062 18:27:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 66849 00:28:31.062 [2024-12-06 18:27:01.821894] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:28:31.062 18:27:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 66849 00:28:31.319 [2024-12-06 
18:27:02.057964] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:28:32.699 18:27:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.LzUvbWgJtQ 00:28:32.699 18:27:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:28:32.699 18:27:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:28:32.699 18:27:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.73 00:28:32.699 18:27:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:28:32.699 18:27:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:28:32.699 18:27:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:28:32.699 18:27:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.73 != \0\.\0\0 ]] 00:28:32.699 00:28:32.699 real 0m4.615s 00:28:32.699 user 0m5.411s 00:28:32.699 sys 0m0.664s 00:28:32.699 18:27:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:32.699 ************************************ 00:28:32.699 END TEST raid_read_error_test 00:28:32.699 ************************************ 00:28:32.699 18:27:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:28:32.699 18:27:03 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test concat 3 write 00:28:32.699 18:27:03 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:28:32.699 18:27:03 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:32.699 18:27:03 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:28:32.699 ************************************ 00:28:32.699 START TEST raid_write_error_test 00:28:32.699 ************************************ 00:28:32.699 18:27:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 3 write 00:28:32.699 18:27:03 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:28:32.699 18:27:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:28:32.699 18:27:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:28:32.699 18:27:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:28:32.699 18:27:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:28:32.699 18:27:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:28:32.699 18:27:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:28:32.699 18:27:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:28:32.699 18:27:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:28:32.699 18:27:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:28:32.699 18:27:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:28:32.699 18:27:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:28:32.699 18:27:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:28:32.699 18:27:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:28:32.699 18:27:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:28:32.699 18:27:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:28:32.699 18:27:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:28:32.699 18:27:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:28:32.699 18:27:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:28:32.699 18:27:03 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:28:32.699 18:27:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:28:32.699 18:27:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:28:32.699 18:27:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:28:32.699 18:27:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:28:32.699 18:27:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:28:32.699 18:27:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.DiStsNQl1d 00:28:32.699 18:27:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=66999 00:28:32.699 18:27:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:28:32.699 18:27:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 66999 00:28:32.699 18:27:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 66999 ']' 00:28:32.699 18:27:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:32.699 18:27:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:32.699 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:32.699 18:27:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:28:32.699 18:27:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:32.699 18:27:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:28:32.699 [2024-12-06 18:27:03.476651] Starting SPDK v25.01-pre git sha1 b6a18b192 / DPDK 24.03.0 initialization... 00:28:32.699 [2024-12-06 18:27:03.476781] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66999 ] 00:28:32.958 [2024-12-06 18:27:03.654724] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:32.958 [2024-12-06 18:27:03.773269] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:33.217 [2024-12-06 18:27:03.985407] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:28:33.217 [2024-12-06 18:27:03.985477] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:28:33.476 18:27:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:33.476 18:27:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:28:33.476 18:27:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:28:33.476 18:27:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:28:33.476 18:27:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:33.476 18:27:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:28:33.476 BaseBdev1_malloc 00:28:33.476 18:27:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:33.477 18:27:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create 
BaseBdev1_malloc 00:28:33.477 18:27:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:33.477 18:27:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:28:33.477 true 00:28:33.477 18:27:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:33.477 18:27:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:28:33.477 18:27:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:33.477 18:27:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:28:33.477 [2024-12-06 18:27:04.389881] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:28:33.477 [2024-12-06 18:27:04.389954] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:33.477 [2024-12-06 18:27:04.389982] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:28:33.477 [2024-12-06 18:27:04.389997] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:33.477 [2024-12-06 18:27:04.392545] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:33.477 [2024-12-06 18:27:04.392596] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:28:33.477 BaseBdev1 00:28:33.477 18:27:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:33.477 18:27:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:28:33.477 18:27:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:28:33.477 18:27:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:33.477 18:27:04 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:28:33.736 BaseBdev2_malloc 00:28:33.736 18:27:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:33.736 18:27:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:28:33.736 18:27:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:33.736 18:27:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:28:33.736 true 00:28:33.736 18:27:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:33.736 18:27:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:28:33.736 18:27:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:33.736 18:27:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:28:33.736 [2024-12-06 18:27:04.460269] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:28:33.736 [2024-12-06 18:27:04.460332] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:33.736 [2024-12-06 18:27:04.460355] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:28:33.736 [2024-12-06 18:27:04.460369] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:33.736 [2024-12-06 18:27:04.462939] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:33.736 [2024-12-06 18:27:04.462989] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:28:33.736 BaseBdev2 00:28:33.736 18:27:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:33.736 18:27:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:28:33.736 18:27:04 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:28:33.736 18:27:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:33.736 18:27:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:28:33.736 BaseBdev3_malloc 00:28:33.736 18:27:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:33.736 18:27:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:28:33.736 18:27:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:33.736 18:27:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:28:33.736 true 00:28:33.736 18:27:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:33.736 18:27:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:28:33.736 18:27:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:33.736 18:27:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:28:33.736 [2024-12-06 18:27:04.546334] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:28:33.736 [2024-12-06 18:27:04.546412] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:33.736 [2024-12-06 18:27:04.546434] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:28:33.736 [2024-12-06 18:27:04.546448] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:33.736 [2024-12-06 18:27:04.549226] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:33.736 [2024-12-06 18:27:04.549270] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
BaseBdev3 00:28:33.736 BaseBdev3 00:28:33.736 18:27:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:33.736 18:27:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:28:33.736 18:27:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:33.736 18:27:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:28:33.736 [2024-12-06 18:27:04.558411] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:28:33.736 [2024-12-06 18:27:04.560624] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:28:33.737 [2024-12-06 18:27:04.560722] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:28:33.737 [2024-12-06 18:27:04.560927] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:28:33.737 [2024-12-06 18:27:04.560941] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:28:33.737 [2024-12-06 18:27:04.561224] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:28:33.737 [2024-12-06 18:27:04.561395] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:28:33.737 [2024-12-06 18:27:04.561412] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:28:33.737 [2024-12-06 18:27:04.561581] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:33.737 18:27:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:33.737 18:27:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:28:33.737 18:27:04 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:28:33.737 18:27:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:28:33.737 18:27:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:28:33.737 18:27:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:28:33.737 18:27:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:28:33.737 18:27:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:28:33.737 18:27:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:28:33.737 18:27:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:28:33.737 18:27:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:28:33.737 18:27:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:33.737 18:27:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:33.737 18:27:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:33.737 18:27:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:28:33.737 18:27:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:33.737 18:27:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:28:33.737 "name": "raid_bdev1", 00:28:33.737 "uuid": "26f789b9-023f-4fac-b6d1-a382e2e95846", 00:28:33.737 "strip_size_kb": 64, 00:28:33.737 "state": "online", 00:28:33.737 "raid_level": "concat", 00:28:33.737 "superblock": true, 00:28:33.737 "num_base_bdevs": 3, 00:28:33.737 "num_base_bdevs_discovered": 3, 00:28:33.737 "num_base_bdevs_operational": 3, 00:28:33.737 "base_bdevs_list": [ 00:28:33.737 { 00:28:33.737 
"name": "BaseBdev1", 00:28:33.737 "uuid": "0a99676b-eebd-582b-b917-5de4e0987e4f", 00:28:33.737 "is_configured": true, 00:28:33.737 "data_offset": 2048, 00:28:33.737 "data_size": 63488 00:28:33.737 }, 00:28:33.737 { 00:28:33.737 "name": "BaseBdev2", 00:28:33.737 "uuid": "fc6df681-48ea-54cc-b1d0-98d51613ee20", 00:28:33.737 "is_configured": true, 00:28:33.737 "data_offset": 2048, 00:28:33.737 "data_size": 63488 00:28:33.737 }, 00:28:33.737 { 00:28:33.737 "name": "BaseBdev3", 00:28:33.737 "uuid": "067fec4c-7053-5f87-bba1-4eb8ca6a1942", 00:28:33.737 "is_configured": true, 00:28:33.737 "data_offset": 2048, 00:28:33.737 "data_size": 63488 00:28:33.737 } 00:28:33.737 ] 00:28:33.737 }' 00:28:33.737 18:27:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:28:33.737 18:27:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:28:34.305 18:27:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:28:34.305 18:27:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:28:34.305 [2024-12-06 18:27:05.143262] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:28:35.268 18:27:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:28:35.268 18:27:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:35.268 18:27:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:28:35.268 18:27:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:35.268 18:27:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:28:35.268 18:27:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:28:35.268 18:27:06 bdev_raid.raid_write_error_test 
-- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:28:35.268 18:27:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:28:35.268 18:27:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:28:35.268 18:27:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:28:35.268 18:27:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:28:35.268 18:27:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:28:35.268 18:27:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:28:35.268 18:27:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:28:35.268 18:27:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:28:35.268 18:27:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:28:35.268 18:27:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:28:35.268 18:27:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:35.268 18:27:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:35.268 18:27:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:28:35.268 18:27:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:28:35.268 18:27:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:35.268 18:27:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:28:35.268 "name": "raid_bdev1", 00:28:35.268 "uuid": "26f789b9-023f-4fac-b6d1-a382e2e95846", 00:28:35.268 "strip_size_kb": 64, 00:28:35.268 "state": "online", 
00:28:35.268 "raid_level": "concat", 00:28:35.268 "superblock": true, 00:28:35.268 "num_base_bdevs": 3, 00:28:35.268 "num_base_bdevs_discovered": 3, 00:28:35.268 "num_base_bdevs_operational": 3, 00:28:35.268 "base_bdevs_list": [ 00:28:35.268 { 00:28:35.268 "name": "BaseBdev1", 00:28:35.268 "uuid": "0a99676b-eebd-582b-b917-5de4e0987e4f", 00:28:35.268 "is_configured": true, 00:28:35.268 "data_offset": 2048, 00:28:35.268 "data_size": 63488 00:28:35.268 }, 00:28:35.268 { 00:28:35.268 "name": "BaseBdev2", 00:28:35.268 "uuid": "fc6df681-48ea-54cc-b1d0-98d51613ee20", 00:28:35.268 "is_configured": true, 00:28:35.268 "data_offset": 2048, 00:28:35.268 "data_size": 63488 00:28:35.268 }, 00:28:35.268 { 00:28:35.268 "name": "BaseBdev3", 00:28:35.268 "uuid": "067fec4c-7053-5f87-bba1-4eb8ca6a1942", 00:28:35.268 "is_configured": true, 00:28:35.268 "data_offset": 2048, 00:28:35.268 "data_size": 63488 00:28:35.268 } 00:28:35.268 ] 00:28:35.268 }' 00:28:35.268 18:27:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:28:35.268 18:27:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:28:35.536 18:27:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:28:35.536 18:27:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:35.536 18:27:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:28:35.536 [2024-12-06 18:27:06.475894] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:28:35.536 [2024-12-06 18:27:06.475931] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:28:35.536 [2024-12-06 18:27:06.478734] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:28:35.536 [2024-12-06 18:27:06.478788] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:35.536 [2024-12-06 18:27:06.478829] bdev_raid.c: 
469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:28:35.536 [2024-12-06 18:27:06.478845] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:28:35.536 { 00:28:35.536 "results": [ 00:28:35.536 { 00:28:35.536 "job": "raid_bdev1", 00:28:35.536 "core_mask": "0x1", 00:28:35.536 "workload": "randrw", 00:28:35.536 "percentage": 50, 00:28:35.536 "status": "finished", 00:28:35.536 "queue_depth": 1, 00:28:35.536 "io_size": 131072, 00:28:35.536 "runtime": 1.332837, 00:28:35.536 "iops": 15665.0813265238, 00:28:35.536 "mibps": 1958.135165815475, 00:28:35.536 "io_failed": 1, 00:28:35.536 "io_timeout": 0, 00:28:35.536 "avg_latency_us": 88.11101447937344, 00:28:35.536 "min_latency_us": 27.347791164658634, 00:28:35.536 "max_latency_us": 1559.4409638554216 00:28:35.536 } 00:28:35.536 ], 00:28:35.536 "core_count": 1 00:28:35.536 } 00:28:35.536 18:27:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:35.536 18:27:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 66999 00:28:35.536 18:27:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 66999 ']' 00:28:35.536 18:27:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 66999 00:28:35.793 18:27:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:28:35.793 18:27:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:35.793 18:27:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 66999 00:28:35.793 killing process with pid 66999 00:28:35.793 18:27:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:28:35.793 18:27:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:28:35.793 18:27:06 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 66999' 00:28:35.793 18:27:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 66999 00:28:35.793 [2024-12-06 18:27:06.533575] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:28:35.793 18:27:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 66999 00:28:36.051 [2024-12-06 18:27:06.775351] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:28:37.426 18:27:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.DiStsNQl1d 00:28:37.426 18:27:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:28:37.426 18:27:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:28:37.426 18:27:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.75 00:28:37.426 18:27:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:28:37.426 18:27:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:28:37.426 18:27:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:28:37.426 18:27:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.75 != \0\.\0\0 ]] 00:28:37.426 00:28:37.426 real 0m4.640s 00:28:37.426 user 0m5.505s 00:28:37.426 sys 0m0.624s 00:28:37.426 18:27:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:37.426 18:27:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:28:37.426 ************************************ 00:28:37.426 END TEST raid_write_error_test 00:28:37.426 ************************************ 00:28:37.426 18:27:08 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:28:37.426 18:27:08 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test 
raid_state_function_test raid1 3 false 00:28:37.426 18:27:08 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:28:37.426 18:27:08 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:37.426 18:27:08 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:28:37.426 ************************************ 00:28:37.426 START TEST raid_state_function_test 00:28:37.426 ************************************ 00:28:37.426 18:27:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 3 false 00:28:37.426 18:27:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:28:37.426 18:27:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:28:37.427 18:27:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:28:37.427 18:27:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:28:37.427 18:27:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:28:37.427 18:27:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:28:37.427 18:27:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:28:37.427 18:27:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:28:37.427 18:27:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:28:37.427 18:27:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:28:37.427 18:27:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:28:37.427 18:27:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:28:37.427 18:27:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:28:37.427 18:27:08 bdev_raid.raid_state_function_test 
-- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:28:37.427 18:27:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:28:37.427 18:27:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:28:37.427 18:27:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:28:37.427 18:27:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:28:37.427 18:27:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:28:37.427 18:27:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:28:37.427 18:27:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:28:37.427 18:27:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:28:37.427 18:27:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:28:37.427 18:27:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:28:37.427 18:27:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:28:37.427 18:27:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=67138 00:28:37.427 18:27:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:28:37.427 Process raid pid: 67138 00:28:37.427 18:27:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 67138' 00:28:37.427 18:27:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 67138 00:28:37.427 18:27:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 67138 ']' 00:28:37.427 18:27:08 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:37.427 18:27:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:37.427 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:37.427 18:27:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:37.427 18:27:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:37.427 18:27:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:37.427 [2024-12-06 18:27:08.181951] Starting SPDK v25.01-pre git sha1 b6a18b192 / DPDK 24.03.0 initialization... 00:28:37.427 [2024-12-06 18:27:08.182091] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:37.427 [2024-12-06 18:27:08.348649] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:37.686 [2024-12-06 18:27:08.523004] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:37.945 [2024-12-06 18:27:08.729209] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:28:37.945 [2024-12-06 18:27:08.729249] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:28:38.204 18:27:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:38.204 18:27:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:28:38.204 18:27:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:28:38.204 18:27:09 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:28:38.204 18:27:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:38.204 [2024-12-06 18:27:09.110588] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:28:38.204 [2024-12-06 18:27:09.110661] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:28:38.204 [2024-12-06 18:27:09.110673] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:28:38.204 [2024-12-06 18:27:09.110686] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:28:38.204 [2024-12-06 18:27:09.110694] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:28:38.204 [2024-12-06 18:27:09.110706] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:28:38.204 18:27:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:38.204 18:27:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:28:38.204 18:27:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:28:38.204 18:27:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:28:38.204 18:27:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:28:38.204 18:27:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:28:38.204 18:27:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:28:38.204 18:27:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:28:38.204 18:27:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:28:38.204 
18:27:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:28:38.204 18:27:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:28:38.204 18:27:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:38.204 18:27:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:38.204 18:27:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:38.204 18:27:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:28:38.204 18:27:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:38.464 18:27:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:28:38.464 "name": "Existed_Raid", 00:28:38.464 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:38.464 "strip_size_kb": 0, 00:28:38.464 "state": "configuring", 00:28:38.464 "raid_level": "raid1", 00:28:38.464 "superblock": false, 00:28:38.464 "num_base_bdevs": 3, 00:28:38.464 "num_base_bdevs_discovered": 0, 00:28:38.464 "num_base_bdevs_operational": 3, 00:28:38.464 "base_bdevs_list": [ 00:28:38.464 { 00:28:38.464 "name": "BaseBdev1", 00:28:38.464 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:38.464 "is_configured": false, 00:28:38.464 "data_offset": 0, 00:28:38.464 "data_size": 0 00:28:38.464 }, 00:28:38.464 { 00:28:38.464 "name": "BaseBdev2", 00:28:38.464 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:38.464 "is_configured": false, 00:28:38.464 "data_offset": 0, 00:28:38.464 "data_size": 0 00:28:38.464 }, 00:28:38.464 { 00:28:38.464 "name": "BaseBdev3", 00:28:38.464 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:38.464 "is_configured": false, 00:28:38.464 "data_offset": 0, 00:28:38.464 "data_size": 0 00:28:38.464 } 00:28:38.464 ] 00:28:38.464 }' 00:28:38.464 18:27:09 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:28:38.464 18:27:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:38.723 18:27:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:28:38.723 18:27:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:38.723 18:27:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:38.723 [2024-12-06 18:27:09.581880] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:28:38.723 [2024-12-06 18:27:09.581925] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:28:38.723 18:27:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:38.723 18:27:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:28:38.723 18:27:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:38.723 18:27:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:38.723 [2024-12-06 18:27:09.593863] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:28:38.723 [2024-12-06 18:27:09.593922] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:28:38.723 [2024-12-06 18:27:09.593933] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:28:38.723 [2024-12-06 18:27:09.593945] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:28:38.723 [2024-12-06 18:27:09.593954] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:28:38.723 [2024-12-06 18:27:09.593966] 
bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:28:38.723 18:27:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:38.723 18:27:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:28:38.723 18:27:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:38.723 18:27:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:38.723 [2024-12-06 18:27:09.645003] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:28:38.723 BaseBdev1 00:28:38.723 18:27:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:38.723 18:27:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:28:38.723 18:27:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:28:38.723 18:27:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:28:38.723 18:27:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:28:38.723 18:27:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:28:38.723 18:27:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:28:38.723 18:27:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:28:38.723 18:27:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:38.723 18:27:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:38.723 18:27:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:38.723 18:27:09 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:28:38.723 18:27:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:38.723 18:27:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:38.723 [ 00:28:38.723 { 00:28:38.723 "name": "BaseBdev1", 00:28:38.723 "aliases": [ 00:28:38.982 "bac045b4-57eb-4f49-9d59-b0c06c6ac21e" 00:28:38.982 ], 00:28:38.982 "product_name": "Malloc disk", 00:28:38.982 "block_size": 512, 00:28:38.982 "num_blocks": 65536, 00:28:38.982 "uuid": "bac045b4-57eb-4f49-9d59-b0c06c6ac21e", 00:28:38.982 "assigned_rate_limits": { 00:28:38.982 "rw_ios_per_sec": 0, 00:28:38.982 "rw_mbytes_per_sec": 0, 00:28:38.982 "r_mbytes_per_sec": 0, 00:28:38.982 "w_mbytes_per_sec": 0 00:28:38.982 }, 00:28:38.982 "claimed": true, 00:28:38.982 "claim_type": "exclusive_write", 00:28:38.982 "zoned": false, 00:28:38.982 "supported_io_types": { 00:28:38.982 "read": true, 00:28:38.982 "write": true, 00:28:38.982 "unmap": true, 00:28:38.982 "flush": true, 00:28:38.982 "reset": true, 00:28:38.982 "nvme_admin": false, 00:28:38.982 "nvme_io": false, 00:28:38.983 "nvme_io_md": false, 00:28:38.983 "write_zeroes": true, 00:28:38.983 "zcopy": true, 00:28:38.983 "get_zone_info": false, 00:28:38.983 "zone_management": false, 00:28:38.983 "zone_append": false, 00:28:38.983 "compare": false, 00:28:38.983 "compare_and_write": false, 00:28:38.983 "abort": true, 00:28:38.983 "seek_hole": false, 00:28:38.983 "seek_data": false, 00:28:38.983 "copy": true, 00:28:38.983 "nvme_iov_md": false 00:28:38.983 }, 00:28:38.983 "memory_domains": [ 00:28:38.983 { 00:28:38.983 "dma_device_id": "system", 00:28:38.983 "dma_device_type": 1 00:28:38.983 }, 00:28:38.983 { 00:28:38.983 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:38.983 "dma_device_type": 2 00:28:38.983 } 00:28:38.983 ], 00:28:38.983 "driver_specific": {} 00:28:38.983 } 00:28:38.983 ] 00:28:38.983 18:27:09 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:38.983 18:27:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:28:38.983 18:27:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:28:38.983 18:27:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:28:38.983 18:27:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:28:38.983 18:27:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:28:38.983 18:27:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:28:38.983 18:27:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:28:38.983 18:27:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:28:38.983 18:27:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:28:38.983 18:27:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:28:38.983 18:27:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:28:38.983 18:27:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:38.983 18:27:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:38.983 18:27:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:28:38.983 18:27:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:38.983 18:27:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:38.983 18:27:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 
-- # raid_bdev_info='{ 00:28:38.983 "name": "Existed_Raid", 00:28:38.983 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:38.983 "strip_size_kb": 0, 00:28:38.983 "state": "configuring", 00:28:38.983 "raid_level": "raid1", 00:28:38.983 "superblock": false, 00:28:38.983 "num_base_bdevs": 3, 00:28:38.983 "num_base_bdevs_discovered": 1, 00:28:38.983 "num_base_bdevs_operational": 3, 00:28:38.983 "base_bdevs_list": [ 00:28:38.983 { 00:28:38.983 "name": "BaseBdev1", 00:28:38.983 "uuid": "bac045b4-57eb-4f49-9d59-b0c06c6ac21e", 00:28:38.983 "is_configured": true, 00:28:38.983 "data_offset": 0, 00:28:38.983 "data_size": 65536 00:28:38.983 }, 00:28:38.983 { 00:28:38.983 "name": "BaseBdev2", 00:28:38.983 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:38.983 "is_configured": false, 00:28:38.983 "data_offset": 0, 00:28:38.983 "data_size": 0 00:28:38.983 }, 00:28:38.983 { 00:28:38.983 "name": "BaseBdev3", 00:28:38.983 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:38.983 "is_configured": false, 00:28:38.983 "data_offset": 0, 00:28:38.983 "data_size": 0 00:28:38.983 } 00:28:38.983 ] 00:28:38.983 }' 00:28:38.983 18:27:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:28:38.983 18:27:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:39.242 18:27:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:28:39.242 18:27:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:39.242 18:27:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:39.242 [2024-12-06 18:27:10.088424] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:28:39.242 [2024-12-06 18:27:10.088485] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:28:39.242 18:27:10 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:39.242 18:27:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:28:39.242 18:27:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:39.242 18:27:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:39.242 [2024-12-06 18:27:10.096440] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:28:39.242 [2024-12-06 18:27:10.098513] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:28:39.242 [2024-12-06 18:27:10.098564] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:28:39.242 [2024-12-06 18:27:10.098575] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:28:39.243 [2024-12-06 18:27:10.098588] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:28:39.243 18:27:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:39.243 18:27:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:28:39.243 18:27:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:28:39.243 18:27:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:28:39.243 18:27:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:28:39.243 18:27:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:28:39.243 18:27:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:28:39.243 18:27:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local 
strip_size=0 00:28:39.243 18:27:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:28:39.243 18:27:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:28:39.243 18:27:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:28:39.243 18:27:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:28:39.243 18:27:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:28:39.243 18:27:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:39.243 18:27:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:39.243 18:27:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:39.243 18:27:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:28:39.243 18:27:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:39.243 18:27:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:28:39.243 "name": "Existed_Raid", 00:28:39.243 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:39.243 "strip_size_kb": 0, 00:28:39.243 "state": "configuring", 00:28:39.243 "raid_level": "raid1", 00:28:39.243 "superblock": false, 00:28:39.243 "num_base_bdevs": 3, 00:28:39.243 "num_base_bdevs_discovered": 1, 00:28:39.243 "num_base_bdevs_operational": 3, 00:28:39.243 "base_bdevs_list": [ 00:28:39.243 { 00:28:39.243 "name": "BaseBdev1", 00:28:39.243 "uuid": "bac045b4-57eb-4f49-9d59-b0c06c6ac21e", 00:28:39.243 "is_configured": true, 00:28:39.243 "data_offset": 0, 00:28:39.243 "data_size": 65536 00:28:39.243 }, 00:28:39.243 { 00:28:39.243 "name": "BaseBdev2", 00:28:39.243 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:39.243 
"is_configured": false, 00:28:39.243 "data_offset": 0, 00:28:39.243 "data_size": 0 00:28:39.243 }, 00:28:39.243 { 00:28:39.243 "name": "BaseBdev3", 00:28:39.243 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:39.243 "is_configured": false, 00:28:39.243 "data_offset": 0, 00:28:39.243 "data_size": 0 00:28:39.243 } 00:28:39.243 ] 00:28:39.243 }' 00:28:39.243 18:27:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:28:39.243 18:27:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:39.811 18:27:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:28:39.811 18:27:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:39.811 18:27:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:39.811 [2024-12-06 18:27:10.561213] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:28:39.811 BaseBdev2 00:28:39.811 18:27:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:39.811 18:27:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:28:39.811 18:27:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:28:39.811 18:27:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:28:39.811 18:27:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:28:39.811 18:27:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:28:39.811 18:27:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:28:39.811 18:27:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:28:39.811 18:27:10 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:39.811 18:27:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:39.811 18:27:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:39.811 18:27:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:28:39.811 18:27:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:39.811 18:27:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:39.811 [ 00:28:39.811 { 00:28:39.811 "name": "BaseBdev2", 00:28:39.811 "aliases": [ 00:28:39.811 "68cf24e6-9461-43f6-881b-298d5fbd1ea1" 00:28:39.811 ], 00:28:39.811 "product_name": "Malloc disk", 00:28:39.811 "block_size": 512, 00:28:39.811 "num_blocks": 65536, 00:28:39.811 "uuid": "68cf24e6-9461-43f6-881b-298d5fbd1ea1", 00:28:39.811 "assigned_rate_limits": { 00:28:39.811 "rw_ios_per_sec": 0, 00:28:39.811 "rw_mbytes_per_sec": 0, 00:28:39.811 "r_mbytes_per_sec": 0, 00:28:39.811 "w_mbytes_per_sec": 0 00:28:39.811 }, 00:28:39.811 "claimed": true, 00:28:39.811 "claim_type": "exclusive_write", 00:28:39.811 "zoned": false, 00:28:39.811 "supported_io_types": { 00:28:39.811 "read": true, 00:28:39.811 "write": true, 00:28:39.811 "unmap": true, 00:28:39.811 "flush": true, 00:28:39.811 "reset": true, 00:28:39.811 "nvme_admin": false, 00:28:39.811 "nvme_io": false, 00:28:39.811 "nvme_io_md": false, 00:28:39.811 "write_zeroes": true, 00:28:39.811 "zcopy": true, 00:28:39.811 "get_zone_info": false, 00:28:39.811 "zone_management": false, 00:28:39.811 "zone_append": false, 00:28:39.811 "compare": false, 00:28:39.811 "compare_and_write": false, 00:28:39.811 "abort": true, 00:28:39.811 "seek_hole": false, 00:28:39.811 "seek_data": false, 00:28:39.811 "copy": true, 00:28:39.811 "nvme_iov_md": false 00:28:39.811 }, 00:28:39.811 
"memory_domains": [ 00:28:39.811 { 00:28:39.811 "dma_device_id": "system", 00:28:39.811 "dma_device_type": 1 00:28:39.811 }, 00:28:39.811 { 00:28:39.811 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:39.811 "dma_device_type": 2 00:28:39.811 } 00:28:39.811 ], 00:28:39.811 "driver_specific": {} 00:28:39.811 } 00:28:39.811 ] 00:28:39.811 18:27:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:39.811 18:27:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:28:39.811 18:27:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:28:39.811 18:27:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:28:39.811 18:27:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:28:39.811 18:27:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:28:39.811 18:27:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:28:39.811 18:27:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:28:39.811 18:27:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:28:39.811 18:27:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:28:39.811 18:27:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:28:39.811 18:27:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:28:39.811 18:27:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:28:39.811 18:27:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:28:39.811 18:27:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:28:39.811 18:27:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:39.811 18:27:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:39.811 18:27:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:28:39.811 18:27:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:39.811 18:27:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:28:39.811 "name": "Existed_Raid", 00:28:39.811 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:39.811 "strip_size_kb": 0, 00:28:39.811 "state": "configuring", 00:28:39.811 "raid_level": "raid1", 00:28:39.811 "superblock": false, 00:28:39.811 "num_base_bdevs": 3, 00:28:39.811 "num_base_bdevs_discovered": 2, 00:28:39.811 "num_base_bdevs_operational": 3, 00:28:39.811 "base_bdevs_list": [ 00:28:39.811 { 00:28:39.811 "name": "BaseBdev1", 00:28:39.811 "uuid": "bac045b4-57eb-4f49-9d59-b0c06c6ac21e", 00:28:39.811 "is_configured": true, 00:28:39.811 "data_offset": 0, 00:28:39.811 "data_size": 65536 00:28:39.811 }, 00:28:39.811 { 00:28:39.811 "name": "BaseBdev2", 00:28:39.811 "uuid": "68cf24e6-9461-43f6-881b-298d5fbd1ea1", 00:28:39.811 "is_configured": true, 00:28:39.811 "data_offset": 0, 00:28:39.811 "data_size": 65536 00:28:39.811 }, 00:28:39.811 { 00:28:39.811 "name": "BaseBdev3", 00:28:39.811 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:39.811 "is_configured": false, 00:28:39.811 "data_offset": 0, 00:28:39.811 "data_size": 0 00:28:39.811 } 00:28:39.811 ] 00:28:39.811 }' 00:28:39.811 18:27:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:28:39.811 18:27:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:40.381 18:27:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 
512 -b BaseBdev3 00:28:40.381 18:27:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:40.381 18:27:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:40.381 [2024-12-06 18:27:11.093170] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:28:40.381 [2024-12-06 18:27:11.093227] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:28:40.381 [2024-12-06 18:27:11.093244] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:28:40.381 [2024-12-06 18:27:11.093531] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:28:40.381 [2024-12-06 18:27:11.093709] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:28:40.381 [2024-12-06 18:27:11.093720] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:28:40.381 [2024-12-06 18:27:11.093996] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:40.381 BaseBdev3 00:28:40.381 18:27:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:40.381 18:27:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:28:40.381 18:27:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:28:40.381 18:27:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:28:40.381 18:27:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:28:40.381 18:27:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:28:40.381 18:27:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:28:40.381 18:27:11 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:28:40.381 18:27:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:40.381 18:27:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:40.381 18:27:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:40.381 18:27:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:28:40.381 18:27:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:40.381 18:27:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:40.381 [ 00:28:40.381 { 00:28:40.381 "name": "BaseBdev3", 00:28:40.381 "aliases": [ 00:28:40.381 "62f4a2c0-704f-4d8f-b761-b4d8371c38d1" 00:28:40.381 ], 00:28:40.381 "product_name": "Malloc disk", 00:28:40.381 "block_size": 512, 00:28:40.381 "num_blocks": 65536, 00:28:40.381 "uuid": "62f4a2c0-704f-4d8f-b761-b4d8371c38d1", 00:28:40.381 "assigned_rate_limits": { 00:28:40.381 "rw_ios_per_sec": 0, 00:28:40.381 "rw_mbytes_per_sec": 0, 00:28:40.381 "r_mbytes_per_sec": 0, 00:28:40.381 "w_mbytes_per_sec": 0 00:28:40.381 }, 00:28:40.381 "claimed": true, 00:28:40.381 "claim_type": "exclusive_write", 00:28:40.381 "zoned": false, 00:28:40.381 "supported_io_types": { 00:28:40.381 "read": true, 00:28:40.381 "write": true, 00:28:40.381 "unmap": true, 00:28:40.381 "flush": true, 00:28:40.381 "reset": true, 00:28:40.381 "nvme_admin": false, 00:28:40.381 "nvme_io": false, 00:28:40.381 "nvme_io_md": false, 00:28:40.381 "write_zeroes": true, 00:28:40.381 "zcopy": true, 00:28:40.381 "get_zone_info": false, 00:28:40.381 "zone_management": false, 00:28:40.381 "zone_append": false, 00:28:40.381 "compare": false, 00:28:40.381 "compare_and_write": false, 00:28:40.381 "abort": true, 00:28:40.381 "seek_hole": false, 00:28:40.381 "seek_data": false, 00:28:40.381 
"copy": true, 00:28:40.381 "nvme_iov_md": false 00:28:40.381 }, 00:28:40.381 "memory_domains": [ 00:28:40.381 { 00:28:40.381 "dma_device_id": "system", 00:28:40.381 "dma_device_type": 1 00:28:40.381 }, 00:28:40.381 { 00:28:40.381 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:40.381 "dma_device_type": 2 00:28:40.381 } 00:28:40.381 ], 00:28:40.381 "driver_specific": {} 00:28:40.381 } 00:28:40.381 ] 00:28:40.381 18:27:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:40.381 18:27:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:28:40.381 18:27:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:28:40.381 18:27:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:28:40.381 18:27:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:28:40.381 18:27:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:28:40.381 18:27:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:28:40.381 18:27:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:28:40.381 18:27:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:28:40.382 18:27:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:28:40.382 18:27:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:28:40.382 18:27:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:28:40.382 18:27:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:28:40.382 18:27:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:28:40.382 18:27:11 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:40.382 18:27:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:40.382 18:27:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:40.382 18:27:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:28:40.382 18:27:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:40.382 18:27:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:28:40.382 "name": "Existed_Raid", 00:28:40.382 "uuid": "0ac06728-c625-4395-8df7-b65baf2e62cc", 00:28:40.382 "strip_size_kb": 0, 00:28:40.382 "state": "online", 00:28:40.382 "raid_level": "raid1", 00:28:40.382 "superblock": false, 00:28:40.382 "num_base_bdevs": 3, 00:28:40.382 "num_base_bdevs_discovered": 3, 00:28:40.382 "num_base_bdevs_operational": 3, 00:28:40.382 "base_bdevs_list": [ 00:28:40.382 { 00:28:40.382 "name": "BaseBdev1", 00:28:40.382 "uuid": "bac045b4-57eb-4f49-9d59-b0c06c6ac21e", 00:28:40.382 "is_configured": true, 00:28:40.382 "data_offset": 0, 00:28:40.382 "data_size": 65536 00:28:40.382 }, 00:28:40.382 { 00:28:40.382 "name": "BaseBdev2", 00:28:40.382 "uuid": "68cf24e6-9461-43f6-881b-298d5fbd1ea1", 00:28:40.382 "is_configured": true, 00:28:40.382 "data_offset": 0, 00:28:40.382 "data_size": 65536 00:28:40.382 }, 00:28:40.382 { 00:28:40.382 "name": "BaseBdev3", 00:28:40.382 "uuid": "62f4a2c0-704f-4d8f-b761-b4d8371c38d1", 00:28:40.382 "is_configured": true, 00:28:40.382 "data_offset": 0, 00:28:40.382 "data_size": 65536 00:28:40.382 } 00:28:40.382 ] 00:28:40.382 }' 00:28:40.382 18:27:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:28:40.382 18:27:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:40.641 18:27:11 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:28:40.641 18:27:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:28:40.641 18:27:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:28:40.641 18:27:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:28:40.641 18:27:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:28:40.641 18:27:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:28:40.641 18:27:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:28:40.641 18:27:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:40.641 18:27:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:28:40.641 18:27:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:40.641 [2024-12-06 18:27:11.548872] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:28:40.641 18:27:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:40.641 18:27:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:28:40.641 "name": "Existed_Raid", 00:28:40.641 "aliases": [ 00:28:40.641 "0ac06728-c625-4395-8df7-b65baf2e62cc" 00:28:40.641 ], 00:28:40.641 "product_name": "Raid Volume", 00:28:40.641 "block_size": 512, 00:28:40.641 "num_blocks": 65536, 00:28:40.641 "uuid": "0ac06728-c625-4395-8df7-b65baf2e62cc", 00:28:40.641 "assigned_rate_limits": { 00:28:40.641 "rw_ios_per_sec": 0, 00:28:40.641 "rw_mbytes_per_sec": 0, 00:28:40.641 "r_mbytes_per_sec": 0, 00:28:40.641 "w_mbytes_per_sec": 0 00:28:40.641 }, 00:28:40.641 "claimed": false, 00:28:40.641 "zoned": false, 
00:28:40.641 "supported_io_types": { 00:28:40.641 "read": true, 00:28:40.641 "write": true, 00:28:40.641 "unmap": false, 00:28:40.641 "flush": false, 00:28:40.641 "reset": true, 00:28:40.641 "nvme_admin": false, 00:28:40.641 "nvme_io": false, 00:28:40.641 "nvme_io_md": false, 00:28:40.641 "write_zeroes": true, 00:28:40.641 "zcopy": false, 00:28:40.641 "get_zone_info": false, 00:28:40.641 "zone_management": false, 00:28:40.641 "zone_append": false, 00:28:40.641 "compare": false, 00:28:40.641 "compare_and_write": false, 00:28:40.641 "abort": false, 00:28:40.641 "seek_hole": false, 00:28:40.641 "seek_data": false, 00:28:40.641 "copy": false, 00:28:40.641 "nvme_iov_md": false 00:28:40.641 }, 00:28:40.641 "memory_domains": [ 00:28:40.641 { 00:28:40.641 "dma_device_id": "system", 00:28:40.641 "dma_device_type": 1 00:28:40.641 }, 00:28:40.641 { 00:28:40.641 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:40.641 "dma_device_type": 2 00:28:40.641 }, 00:28:40.641 { 00:28:40.641 "dma_device_id": "system", 00:28:40.641 "dma_device_type": 1 00:28:40.641 }, 00:28:40.641 { 00:28:40.641 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:40.641 "dma_device_type": 2 00:28:40.641 }, 00:28:40.641 { 00:28:40.641 "dma_device_id": "system", 00:28:40.641 "dma_device_type": 1 00:28:40.641 }, 00:28:40.641 { 00:28:40.641 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:40.641 "dma_device_type": 2 00:28:40.641 } 00:28:40.641 ], 00:28:40.641 "driver_specific": { 00:28:40.641 "raid": { 00:28:40.641 "uuid": "0ac06728-c625-4395-8df7-b65baf2e62cc", 00:28:40.641 "strip_size_kb": 0, 00:28:40.641 "state": "online", 00:28:40.641 "raid_level": "raid1", 00:28:40.641 "superblock": false, 00:28:40.641 "num_base_bdevs": 3, 00:28:40.641 "num_base_bdevs_discovered": 3, 00:28:40.641 "num_base_bdevs_operational": 3, 00:28:40.641 "base_bdevs_list": [ 00:28:40.641 { 00:28:40.641 "name": "BaseBdev1", 00:28:40.641 "uuid": "bac045b4-57eb-4f49-9d59-b0c06c6ac21e", 00:28:40.641 "is_configured": true, 00:28:40.641 
"data_offset": 0, 00:28:40.641 "data_size": 65536 00:28:40.641 }, 00:28:40.641 { 00:28:40.641 "name": "BaseBdev2", 00:28:40.641 "uuid": "68cf24e6-9461-43f6-881b-298d5fbd1ea1", 00:28:40.641 "is_configured": true, 00:28:40.641 "data_offset": 0, 00:28:40.641 "data_size": 65536 00:28:40.641 }, 00:28:40.641 { 00:28:40.641 "name": "BaseBdev3", 00:28:40.641 "uuid": "62f4a2c0-704f-4d8f-b761-b4d8371c38d1", 00:28:40.641 "is_configured": true, 00:28:40.641 "data_offset": 0, 00:28:40.641 "data_size": 65536 00:28:40.641 } 00:28:40.641 ] 00:28:40.641 } 00:28:40.641 } 00:28:40.641 }' 00:28:40.641 18:27:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:28:40.901 18:27:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:28:40.901 BaseBdev2 00:28:40.901 BaseBdev3' 00:28:40.901 18:27:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:28:40.901 18:27:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:28:40.901 18:27:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:28:40.901 18:27:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:28:40.901 18:27:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:40.901 18:27:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:28:40.901 18:27:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:40.901 18:27:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:40.901 18:27:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # 
cmp_base_bdev='512 ' 00:28:40.901 18:27:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:28:40.901 18:27:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:28:40.901 18:27:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:28:40.901 18:27:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:28:40.901 18:27:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:40.901 18:27:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:40.901 18:27:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:40.901 18:27:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:28:40.901 18:27:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:28:40.901 18:27:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:28:40.901 18:27:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:28:40.901 18:27:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:28:40.901 18:27:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:40.901 18:27:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:40.901 18:27:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:40.901 18:27:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:28:40.901 18:27:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 
== \5\1\2\ \ \ ]] 00:28:40.901 18:27:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:28:40.901 18:27:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:40.901 18:27:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:40.901 [2024-12-06 18:27:11.772325] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:28:41.160 18:27:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:41.160 18:27:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:28:41.160 18:27:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:28:41.160 18:27:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:28:41.160 18:27:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:28:41.160 18:27:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:28:41.160 18:27:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:28:41.160 18:27:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:28:41.160 18:27:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:28:41.160 18:27:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:28:41.160 18:27:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:28:41.160 18:27:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:28:41.160 18:27:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:28:41.160 18:27:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # 
local num_base_bdevs 00:28:41.160 18:27:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:28:41.160 18:27:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:28:41.160 18:27:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:41.160 18:27:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:41.160 18:27:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:28:41.160 18:27:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:41.160 18:27:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:41.160 18:27:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:28:41.160 "name": "Existed_Raid", 00:28:41.160 "uuid": "0ac06728-c625-4395-8df7-b65baf2e62cc", 00:28:41.160 "strip_size_kb": 0, 00:28:41.160 "state": "online", 00:28:41.160 "raid_level": "raid1", 00:28:41.160 "superblock": false, 00:28:41.160 "num_base_bdevs": 3, 00:28:41.160 "num_base_bdevs_discovered": 2, 00:28:41.160 "num_base_bdevs_operational": 2, 00:28:41.160 "base_bdevs_list": [ 00:28:41.160 { 00:28:41.160 "name": null, 00:28:41.160 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:41.160 "is_configured": false, 00:28:41.160 "data_offset": 0, 00:28:41.160 "data_size": 65536 00:28:41.160 }, 00:28:41.161 { 00:28:41.161 "name": "BaseBdev2", 00:28:41.161 "uuid": "68cf24e6-9461-43f6-881b-298d5fbd1ea1", 00:28:41.161 "is_configured": true, 00:28:41.161 "data_offset": 0, 00:28:41.161 "data_size": 65536 00:28:41.161 }, 00:28:41.161 { 00:28:41.161 "name": "BaseBdev3", 00:28:41.161 "uuid": "62f4a2c0-704f-4d8f-b761-b4d8371c38d1", 00:28:41.161 "is_configured": true, 00:28:41.161 "data_offset": 0, 00:28:41.161 "data_size": 65536 00:28:41.161 } 00:28:41.161 ] 
00:28:41.161 }' 00:28:41.161 18:27:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:28:41.161 18:27:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:41.420 18:27:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:28:41.420 18:27:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:28:41.420 18:27:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:41.420 18:27:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:28:41.420 18:27:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:41.420 18:27:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:41.420 18:27:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:41.420 18:27:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:28:41.420 18:27:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:28:41.420 18:27:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:28:41.420 18:27:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:41.420 18:27:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:41.420 [2024-12-06 18:27:12.344316] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:28:41.680 18:27:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:41.680 18:27:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:28:41.680 18:27:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:28:41.680 18:27:12 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:41.680 18:27:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:28:41.680 18:27:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:41.680 18:27:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:41.680 18:27:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:41.680 18:27:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:28:41.680 18:27:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:28:41.681 18:27:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:28:41.681 18:27:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:41.681 18:27:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:41.681 [2024-12-06 18:27:12.491610] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:28:41.681 [2024-12-06 18:27:12.491720] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:28:41.681 [2024-12-06 18:27:12.587389] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:28:41.681 [2024-12-06 18:27:12.587446] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:28:41.681 [2024-12-06 18:27:12.587461] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:28:41.681 18:27:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:41.681 18:27:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:28:41.681 18:27:12 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:28:41.681 18:27:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:41.681 18:27:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:41.681 18:27:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:28:41.681 18:27:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:41.681 18:27:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:41.966 18:27:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:28:41.966 18:27:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:28:41.966 18:27:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:28:41.966 18:27:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:28:41.966 18:27:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:28:41.966 18:27:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:28:41.966 18:27:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:41.966 18:27:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:41.966 BaseBdev2 00:28:41.966 18:27:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:41.966 18:27:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:28:41.966 18:27:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:28:41.966 18:27:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:28:41.966 
18:27:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:28:41.966 18:27:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:28:41.966 18:27:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:28:41.966 18:27:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:28:41.966 18:27:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:41.966 18:27:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:41.966 18:27:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:41.966 18:27:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:28:41.966 18:27:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:41.966 18:27:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:41.966 [ 00:28:41.966 { 00:28:41.966 "name": "BaseBdev2", 00:28:41.966 "aliases": [ 00:28:41.966 "6d9f0f01-bf0a-47b9-8f90-ef92b4d24645" 00:28:41.966 ], 00:28:41.966 "product_name": "Malloc disk", 00:28:41.966 "block_size": 512, 00:28:41.966 "num_blocks": 65536, 00:28:41.966 "uuid": "6d9f0f01-bf0a-47b9-8f90-ef92b4d24645", 00:28:41.966 "assigned_rate_limits": { 00:28:41.966 "rw_ios_per_sec": 0, 00:28:41.966 "rw_mbytes_per_sec": 0, 00:28:41.966 "r_mbytes_per_sec": 0, 00:28:41.966 "w_mbytes_per_sec": 0 00:28:41.966 }, 00:28:41.966 "claimed": false, 00:28:41.966 "zoned": false, 00:28:41.966 "supported_io_types": { 00:28:41.966 "read": true, 00:28:41.966 "write": true, 00:28:41.966 "unmap": true, 00:28:41.966 "flush": true, 00:28:41.966 "reset": true, 00:28:41.966 "nvme_admin": false, 00:28:41.966 "nvme_io": false, 00:28:41.966 "nvme_io_md": false, 00:28:41.966 "write_zeroes": true, 
00:28:41.966 "zcopy": true, 00:28:41.966 "get_zone_info": false, 00:28:41.966 "zone_management": false, 00:28:41.966 "zone_append": false, 00:28:41.966 "compare": false, 00:28:41.966 "compare_and_write": false, 00:28:41.966 "abort": true, 00:28:41.966 "seek_hole": false, 00:28:41.966 "seek_data": false, 00:28:41.966 "copy": true, 00:28:41.966 "nvme_iov_md": false 00:28:41.966 }, 00:28:41.966 "memory_domains": [ 00:28:41.966 { 00:28:41.966 "dma_device_id": "system", 00:28:41.966 "dma_device_type": 1 00:28:41.966 }, 00:28:41.966 { 00:28:41.966 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:41.966 "dma_device_type": 2 00:28:41.966 } 00:28:41.966 ], 00:28:41.966 "driver_specific": {} 00:28:41.966 } 00:28:41.966 ] 00:28:41.966 18:27:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:41.966 18:27:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:28:41.966 18:27:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:28:41.966 18:27:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:28:41.966 18:27:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:28:41.966 18:27:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:41.966 18:27:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:41.966 BaseBdev3 00:28:41.966 18:27:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:41.966 18:27:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:28:41.966 18:27:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:28:41.966 18:27:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:28:41.966 18:27:12 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:28:41.966 18:27:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:28:41.966 18:27:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:28:41.966 18:27:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:28:41.967 18:27:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:41.967 18:27:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:41.967 18:27:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:41.967 18:27:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:28:41.967 18:27:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:41.967 18:27:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:41.967 [ 00:28:41.967 { 00:28:41.967 "name": "BaseBdev3", 00:28:41.967 "aliases": [ 00:28:41.967 "35b029cd-1c84-4c76-9d62-a114944aa56f" 00:28:41.967 ], 00:28:41.967 "product_name": "Malloc disk", 00:28:41.967 "block_size": 512, 00:28:41.967 "num_blocks": 65536, 00:28:41.967 "uuid": "35b029cd-1c84-4c76-9d62-a114944aa56f", 00:28:41.967 "assigned_rate_limits": { 00:28:41.967 "rw_ios_per_sec": 0, 00:28:41.967 "rw_mbytes_per_sec": 0, 00:28:41.967 "r_mbytes_per_sec": 0, 00:28:41.967 "w_mbytes_per_sec": 0 00:28:41.967 }, 00:28:41.967 "claimed": false, 00:28:41.967 "zoned": false, 00:28:41.967 "supported_io_types": { 00:28:41.967 "read": true, 00:28:41.967 "write": true, 00:28:41.967 "unmap": true, 00:28:41.967 "flush": true, 00:28:41.967 "reset": true, 00:28:41.967 "nvme_admin": false, 00:28:41.967 "nvme_io": false, 00:28:41.967 "nvme_io_md": false, 00:28:41.967 "write_zeroes": true, 
00:28:41.967 "zcopy": true, 00:28:41.967 "get_zone_info": false, 00:28:41.967 "zone_management": false, 00:28:41.967 "zone_append": false, 00:28:41.967 "compare": false, 00:28:41.967 "compare_and_write": false, 00:28:41.967 "abort": true, 00:28:41.967 "seek_hole": false, 00:28:41.967 "seek_data": false, 00:28:41.967 "copy": true, 00:28:41.967 "nvme_iov_md": false 00:28:41.967 }, 00:28:41.967 "memory_domains": [ 00:28:41.967 { 00:28:41.967 "dma_device_id": "system", 00:28:41.967 "dma_device_type": 1 00:28:41.967 }, 00:28:41.967 { 00:28:41.967 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:41.967 "dma_device_type": 2 00:28:41.967 } 00:28:41.967 ], 00:28:41.967 "driver_specific": {} 00:28:41.967 } 00:28:41.967 ] 00:28:41.967 18:27:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:41.967 18:27:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:28:41.967 18:27:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:28:41.967 18:27:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:28:41.967 18:27:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:28:41.967 18:27:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:41.967 18:27:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:41.967 [2024-12-06 18:27:12.811472] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:28:41.967 [2024-12-06 18:27:12.811528] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:28:41.967 [2024-12-06 18:27:12.811549] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:28:41.967 [2024-12-06 18:27:12.813613] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:28:41.967 18:27:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:41.967 18:27:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:28:41.967 18:27:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:28:41.967 18:27:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:28:41.967 18:27:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:28:41.967 18:27:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:28:41.967 18:27:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:28:41.967 18:27:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:28:41.967 18:27:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:28:41.967 18:27:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:28:41.967 18:27:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:28:41.967 18:27:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:41.967 18:27:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:41.967 18:27:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:41.967 18:27:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:28:41.967 18:27:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:41.967 18:27:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 
-- # raid_bdev_info='{ 00:28:41.967 "name": "Existed_Raid", 00:28:41.967 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:41.967 "strip_size_kb": 0, 00:28:41.967 "state": "configuring", 00:28:41.967 "raid_level": "raid1", 00:28:41.967 "superblock": false, 00:28:41.967 "num_base_bdevs": 3, 00:28:41.967 "num_base_bdevs_discovered": 2, 00:28:41.967 "num_base_bdevs_operational": 3, 00:28:41.967 "base_bdevs_list": [ 00:28:41.967 { 00:28:41.967 "name": "BaseBdev1", 00:28:41.967 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:41.967 "is_configured": false, 00:28:41.967 "data_offset": 0, 00:28:41.967 "data_size": 0 00:28:41.967 }, 00:28:41.967 { 00:28:41.967 "name": "BaseBdev2", 00:28:41.967 "uuid": "6d9f0f01-bf0a-47b9-8f90-ef92b4d24645", 00:28:41.967 "is_configured": true, 00:28:41.967 "data_offset": 0, 00:28:41.967 "data_size": 65536 00:28:41.967 }, 00:28:41.967 { 00:28:41.967 "name": "BaseBdev3", 00:28:41.967 "uuid": "35b029cd-1c84-4c76-9d62-a114944aa56f", 00:28:41.967 "is_configured": true, 00:28:41.967 "data_offset": 0, 00:28:41.967 "data_size": 65536 00:28:41.967 } 00:28:41.967 ] 00:28:41.967 }' 00:28:41.967 18:27:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:28:41.967 18:27:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:42.536 18:27:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:28:42.536 18:27:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:42.536 18:27:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:42.536 [2024-12-06 18:27:13.266907] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:28:42.537 18:27:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:42.537 18:27:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state 
Existed_Raid configuring raid1 0 3 00:28:42.537 18:27:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:28:42.537 18:27:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:28:42.537 18:27:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:28:42.537 18:27:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:28:42.537 18:27:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:28:42.537 18:27:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:28:42.537 18:27:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:28:42.537 18:27:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:28:42.537 18:27:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:28:42.537 18:27:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:42.537 18:27:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:28:42.537 18:27:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:42.537 18:27:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:42.537 18:27:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:42.537 18:27:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:28:42.537 "name": "Existed_Raid", 00:28:42.537 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:42.537 "strip_size_kb": 0, 00:28:42.537 "state": "configuring", 00:28:42.537 "raid_level": "raid1", 00:28:42.537 "superblock": false, 00:28:42.537 "num_base_bdevs": 3, 
00:28:42.537 "num_base_bdevs_discovered": 1, 00:28:42.537 "num_base_bdevs_operational": 3, 00:28:42.537 "base_bdevs_list": [ 00:28:42.537 { 00:28:42.537 "name": "BaseBdev1", 00:28:42.537 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:42.537 "is_configured": false, 00:28:42.537 "data_offset": 0, 00:28:42.537 "data_size": 0 00:28:42.537 }, 00:28:42.537 { 00:28:42.537 "name": null, 00:28:42.537 "uuid": "6d9f0f01-bf0a-47b9-8f90-ef92b4d24645", 00:28:42.537 "is_configured": false, 00:28:42.537 "data_offset": 0, 00:28:42.537 "data_size": 65536 00:28:42.537 }, 00:28:42.537 { 00:28:42.537 "name": "BaseBdev3", 00:28:42.537 "uuid": "35b029cd-1c84-4c76-9d62-a114944aa56f", 00:28:42.537 "is_configured": true, 00:28:42.537 "data_offset": 0, 00:28:42.537 "data_size": 65536 00:28:42.537 } 00:28:42.537 ] 00:28:42.537 }' 00:28:42.537 18:27:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:28:42.537 18:27:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:42.795 18:27:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:28:42.795 18:27:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:42.795 18:27:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:42.795 18:27:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:42.796 18:27:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:42.796 18:27:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:28:42.796 18:27:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:28:42.796 18:27:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:42.796 18:27:13 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:42.796 [2024-12-06 18:27:13.728125] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:28:42.796 BaseBdev1 00:28:42.796 18:27:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:42.796 18:27:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:28:42.796 18:27:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:28:42.796 18:27:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:28:42.796 18:27:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:28:42.796 18:27:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:28:42.796 18:27:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:28:42.796 18:27:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:28:42.796 18:27:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:42.796 18:27:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:42.796 18:27:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:42.796 18:27:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:28:42.796 18:27:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:42.796 18:27:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:43.054 [ 00:28:43.054 { 00:28:43.054 "name": "BaseBdev1", 00:28:43.054 "aliases": [ 00:28:43.054 "7b0ca7ab-f91c-428a-9f54-09479324be8b" 00:28:43.054 ], 00:28:43.054 "product_name": "Malloc disk", 
00:28:43.054 "block_size": 512, 00:28:43.054 "num_blocks": 65536, 00:28:43.054 "uuid": "7b0ca7ab-f91c-428a-9f54-09479324be8b", 00:28:43.054 "assigned_rate_limits": { 00:28:43.054 "rw_ios_per_sec": 0, 00:28:43.054 "rw_mbytes_per_sec": 0, 00:28:43.054 "r_mbytes_per_sec": 0, 00:28:43.054 "w_mbytes_per_sec": 0 00:28:43.054 }, 00:28:43.054 "claimed": true, 00:28:43.054 "claim_type": "exclusive_write", 00:28:43.054 "zoned": false, 00:28:43.054 "supported_io_types": { 00:28:43.054 "read": true, 00:28:43.054 "write": true, 00:28:43.054 "unmap": true, 00:28:43.054 "flush": true, 00:28:43.054 "reset": true, 00:28:43.054 "nvme_admin": false, 00:28:43.054 "nvme_io": false, 00:28:43.054 "nvme_io_md": false, 00:28:43.054 "write_zeroes": true, 00:28:43.054 "zcopy": true, 00:28:43.054 "get_zone_info": false, 00:28:43.054 "zone_management": false, 00:28:43.054 "zone_append": false, 00:28:43.054 "compare": false, 00:28:43.054 "compare_and_write": false, 00:28:43.054 "abort": true, 00:28:43.054 "seek_hole": false, 00:28:43.054 "seek_data": false, 00:28:43.054 "copy": true, 00:28:43.054 "nvme_iov_md": false 00:28:43.054 }, 00:28:43.054 "memory_domains": [ 00:28:43.054 { 00:28:43.054 "dma_device_id": "system", 00:28:43.054 "dma_device_type": 1 00:28:43.054 }, 00:28:43.054 { 00:28:43.054 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:43.054 "dma_device_type": 2 00:28:43.054 } 00:28:43.054 ], 00:28:43.054 "driver_specific": {} 00:28:43.054 } 00:28:43.054 ] 00:28:43.054 18:27:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:43.054 18:27:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:28:43.054 18:27:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:28:43.054 18:27:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:28:43.054 18:27:13 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:28:43.054 18:27:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:28:43.054 18:27:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:28:43.054 18:27:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:28:43.054 18:27:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:28:43.054 18:27:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:28:43.054 18:27:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:28:43.054 18:27:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:28:43.054 18:27:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:43.054 18:27:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:28:43.054 18:27:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:43.054 18:27:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:43.054 18:27:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:43.054 18:27:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:28:43.054 "name": "Existed_Raid", 00:28:43.054 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:43.054 "strip_size_kb": 0, 00:28:43.054 "state": "configuring", 00:28:43.054 "raid_level": "raid1", 00:28:43.054 "superblock": false, 00:28:43.054 "num_base_bdevs": 3, 00:28:43.054 "num_base_bdevs_discovered": 2, 00:28:43.054 "num_base_bdevs_operational": 3, 00:28:43.054 "base_bdevs_list": [ 00:28:43.054 { 00:28:43.054 "name": "BaseBdev1", 00:28:43.054 "uuid": 
"7b0ca7ab-f91c-428a-9f54-09479324be8b", 00:28:43.054 "is_configured": true, 00:28:43.054 "data_offset": 0, 00:28:43.054 "data_size": 65536 00:28:43.054 }, 00:28:43.054 { 00:28:43.054 "name": null, 00:28:43.054 "uuid": "6d9f0f01-bf0a-47b9-8f90-ef92b4d24645", 00:28:43.054 "is_configured": false, 00:28:43.054 "data_offset": 0, 00:28:43.054 "data_size": 65536 00:28:43.054 }, 00:28:43.054 { 00:28:43.054 "name": "BaseBdev3", 00:28:43.054 "uuid": "35b029cd-1c84-4c76-9d62-a114944aa56f", 00:28:43.054 "is_configured": true, 00:28:43.054 "data_offset": 0, 00:28:43.054 "data_size": 65536 00:28:43.054 } 00:28:43.054 ] 00:28:43.054 }' 00:28:43.054 18:27:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:28:43.054 18:27:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:43.313 18:27:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:43.313 18:27:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:28:43.313 18:27:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:43.313 18:27:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:43.313 18:27:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:43.313 18:27:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:28:43.313 18:27:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:28:43.313 18:27:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:43.313 18:27:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:43.313 [2024-12-06 18:27:14.259454] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:28:43.572 18:27:14 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:43.572 18:27:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:28:43.572 18:27:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:28:43.572 18:27:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:28:43.572 18:27:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:28:43.572 18:27:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:28:43.572 18:27:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:28:43.572 18:27:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:28:43.572 18:27:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:28:43.572 18:27:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:28:43.572 18:27:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:28:43.572 18:27:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:28:43.572 18:27:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:43.572 18:27:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:43.572 18:27:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:43.572 18:27:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:43.572 18:27:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:28:43.572 "name": "Existed_Raid", 00:28:43.572 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:28:43.572 "strip_size_kb": 0, 00:28:43.572 "state": "configuring", 00:28:43.572 "raid_level": "raid1", 00:28:43.572 "superblock": false, 00:28:43.572 "num_base_bdevs": 3, 00:28:43.572 "num_base_bdevs_discovered": 1, 00:28:43.572 "num_base_bdevs_operational": 3, 00:28:43.572 "base_bdevs_list": [ 00:28:43.572 { 00:28:43.572 "name": "BaseBdev1", 00:28:43.572 "uuid": "7b0ca7ab-f91c-428a-9f54-09479324be8b", 00:28:43.572 "is_configured": true, 00:28:43.572 "data_offset": 0, 00:28:43.572 "data_size": 65536 00:28:43.572 }, 00:28:43.572 { 00:28:43.572 "name": null, 00:28:43.572 "uuid": "6d9f0f01-bf0a-47b9-8f90-ef92b4d24645", 00:28:43.572 "is_configured": false, 00:28:43.572 "data_offset": 0, 00:28:43.572 "data_size": 65536 00:28:43.572 }, 00:28:43.572 { 00:28:43.572 "name": null, 00:28:43.572 "uuid": "35b029cd-1c84-4c76-9d62-a114944aa56f", 00:28:43.572 "is_configured": false, 00:28:43.572 "data_offset": 0, 00:28:43.572 "data_size": 65536 00:28:43.572 } 00:28:43.572 ] 00:28:43.572 }' 00:28:43.572 18:27:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:28:43.572 18:27:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:43.831 18:27:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:43.831 18:27:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:28:43.831 18:27:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:43.831 18:27:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:43.831 18:27:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:43.831 18:27:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:28:43.831 18:27:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 
-- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:28:43.831 18:27:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:43.831 18:27:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:43.831 [2024-12-06 18:27:14.691598] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:28:43.831 18:27:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:43.831 18:27:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:28:43.831 18:27:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:28:43.831 18:27:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:28:43.831 18:27:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:28:43.831 18:27:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:28:43.831 18:27:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:28:43.831 18:27:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:28:43.831 18:27:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:28:43.831 18:27:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:28:43.831 18:27:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:28:43.831 18:27:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:43.831 18:27:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:43.831 18:27:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | 
select(.name == "Existed_Raid")' 00:28:43.831 18:27:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:43.831 18:27:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:43.831 18:27:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:28:43.831 "name": "Existed_Raid", 00:28:43.831 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:43.831 "strip_size_kb": 0, 00:28:43.831 "state": "configuring", 00:28:43.831 "raid_level": "raid1", 00:28:43.831 "superblock": false, 00:28:43.831 "num_base_bdevs": 3, 00:28:43.831 "num_base_bdevs_discovered": 2, 00:28:43.831 "num_base_bdevs_operational": 3, 00:28:43.831 "base_bdevs_list": [ 00:28:43.831 { 00:28:43.831 "name": "BaseBdev1", 00:28:43.831 "uuid": "7b0ca7ab-f91c-428a-9f54-09479324be8b", 00:28:43.831 "is_configured": true, 00:28:43.831 "data_offset": 0, 00:28:43.831 "data_size": 65536 00:28:43.831 }, 00:28:43.831 { 00:28:43.831 "name": null, 00:28:43.831 "uuid": "6d9f0f01-bf0a-47b9-8f90-ef92b4d24645", 00:28:43.831 "is_configured": false, 00:28:43.831 "data_offset": 0, 00:28:43.831 "data_size": 65536 00:28:43.831 }, 00:28:43.831 { 00:28:43.831 "name": "BaseBdev3", 00:28:43.831 "uuid": "35b029cd-1c84-4c76-9d62-a114944aa56f", 00:28:43.831 "is_configured": true, 00:28:43.831 "data_offset": 0, 00:28:43.831 "data_size": 65536 00:28:43.831 } 00:28:43.831 ] 00:28:43.831 }' 00:28:43.831 18:27:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:28:43.831 18:27:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:44.397 18:27:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:44.397 18:27:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:28:44.397 18:27:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:28:44.397 18:27:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:44.397 18:27:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:44.397 18:27:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:28:44.397 18:27:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:28:44.397 18:27:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:44.397 18:27:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:44.397 [2024-12-06 18:27:15.171314] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:28:44.397 18:27:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:44.397 18:27:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:28:44.397 18:27:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:28:44.397 18:27:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:28:44.397 18:27:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:28:44.397 18:27:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:28:44.397 18:27:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:28:44.397 18:27:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:28:44.397 18:27:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:28:44.397 18:27:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:28:44.397 18:27:15 bdev_raid.raid_state_function_test 
-- bdev/bdev_raid.sh@111 -- # local tmp 00:28:44.397 18:27:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:44.397 18:27:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:44.397 18:27:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:28:44.397 18:27:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:44.397 18:27:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:44.397 18:27:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:28:44.397 "name": "Existed_Raid", 00:28:44.398 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:44.398 "strip_size_kb": 0, 00:28:44.398 "state": "configuring", 00:28:44.398 "raid_level": "raid1", 00:28:44.398 "superblock": false, 00:28:44.398 "num_base_bdevs": 3, 00:28:44.398 "num_base_bdevs_discovered": 1, 00:28:44.398 "num_base_bdevs_operational": 3, 00:28:44.398 "base_bdevs_list": [ 00:28:44.398 { 00:28:44.398 "name": null, 00:28:44.398 "uuid": "7b0ca7ab-f91c-428a-9f54-09479324be8b", 00:28:44.398 "is_configured": false, 00:28:44.398 "data_offset": 0, 00:28:44.398 "data_size": 65536 00:28:44.398 }, 00:28:44.398 { 00:28:44.398 "name": null, 00:28:44.398 "uuid": "6d9f0f01-bf0a-47b9-8f90-ef92b4d24645", 00:28:44.398 "is_configured": false, 00:28:44.398 "data_offset": 0, 00:28:44.398 "data_size": 65536 00:28:44.398 }, 00:28:44.398 { 00:28:44.398 "name": "BaseBdev3", 00:28:44.398 "uuid": "35b029cd-1c84-4c76-9d62-a114944aa56f", 00:28:44.398 "is_configured": true, 00:28:44.398 "data_offset": 0, 00:28:44.398 "data_size": 65536 00:28:44.398 } 00:28:44.398 ] 00:28:44.398 }' 00:28:44.398 18:27:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:28:44.398 18:27:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- 
# set +x 00:28:44.962 18:27:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:44.962 18:27:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:44.962 18:27:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:44.963 18:27:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:28:44.963 18:27:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:44.963 18:27:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:28:44.963 18:27:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:28:44.963 18:27:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:44.963 18:27:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:44.963 [2024-12-06 18:27:15.741800] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:28:44.963 18:27:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:44.963 18:27:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:28:44.963 18:27:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:28:44.963 18:27:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:28:44.963 18:27:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:28:44.963 18:27:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:28:44.963 18:27:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 
00:28:44.963 18:27:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:28:44.963 18:27:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:28:44.963 18:27:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:28:44.963 18:27:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:28:44.963 18:27:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:44.963 18:27:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:44.963 18:27:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:44.963 18:27:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:28:44.963 18:27:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:44.963 18:27:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:28:44.963 "name": "Existed_Raid", 00:28:44.963 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:44.963 "strip_size_kb": 0, 00:28:44.963 "state": "configuring", 00:28:44.963 "raid_level": "raid1", 00:28:44.963 "superblock": false, 00:28:44.963 "num_base_bdevs": 3, 00:28:44.963 "num_base_bdevs_discovered": 2, 00:28:44.963 "num_base_bdevs_operational": 3, 00:28:44.963 "base_bdevs_list": [ 00:28:44.963 { 00:28:44.963 "name": null, 00:28:44.963 "uuid": "7b0ca7ab-f91c-428a-9f54-09479324be8b", 00:28:44.963 "is_configured": false, 00:28:44.963 "data_offset": 0, 00:28:44.963 "data_size": 65536 00:28:44.963 }, 00:28:44.963 { 00:28:44.963 "name": "BaseBdev2", 00:28:44.963 "uuid": "6d9f0f01-bf0a-47b9-8f90-ef92b4d24645", 00:28:44.963 "is_configured": true, 00:28:44.963 "data_offset": 0, 00:28:44.963 "data_size": 65536 00:28:44.963 }, 00:28:44.963 { 00:28:44.963 "name": "BaseBdev3", 
00:28:44.963 "uuid": "35b029cd-1c84-4c76-9d62-a114944aa56f", 00:28:44.963 "is_configured": true, 00:28:44.963 "data_offset": 0, 00:28:44.963 "data_size": 65536 00:28:44.963 } 00:28:44.963 ] 00:28:44.963 }' 00:28:44.963 18:27:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:28:44.963 18:27:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:45.220 18:27:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:45.220 18:27:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:45.220 18:27:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:45.220 18:27:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:28:45.220 18:27:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:45.220 18:27:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:28:45.220 18:27:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:45.220 18:27:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:45.220 18:27:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:45.220 18:27:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:28:45.478 18:27:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:45.478 18:27:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 7b0ca7ab-f91c-428a-9f54-09479324be8b 00:28:45.478 18:27:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:45.478 18:27:16 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:28:45.478 NewBaseBdev 00:28:45.478 [2024-12-06 18:27:16.248081] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:28:45.478 [2024-12-06 18:27:16.248131] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:28:45.478 [2024-12-06 18:27:16.248140] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:28:45.478 [2024-12-06 18:27:16.248420] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:28:45.478 [2024-12-06 18:27:16.248570] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:28:45.478 [2024-12-06 18:27:16.248583] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:28:45.478 [2024-12-06 18:27:16.248834] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:45.478 18:27:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:45.478 18:27:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:28:45.478 18:27:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:28:45.478 18:27:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:28:45.478 18:27:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:28:45.478 18:27:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:28:45.478 18:27:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:28:45.478 18:27:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:28:45.478 18:27:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:45.478 
18:27:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:45.478 18:27:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:45.478 18:27:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:28:45.478 18:27:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:45.478 18:27:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:45.478 [ 00:28:45.478 { 00:28:45.478 "name": "NewBaseBdev", 00:28:45.478 "aliases": [ 00:28:45.478 "7b0ca7ab-f91c-428a-9f54-09479324be8b" 00:28:45.478 ], 00:28:45.478 "product_name": "Malloc disk", 00:28:45.478 "block_size": 512, 00:28:45.478 "num_blocks": 65536, 00:28:45.478 "uuid": "7b0ca7ab-f91c-428a-9f54-09479324be8b", 00:28:45.478 "assigned_rate_limits": { 00:28:45.478 "rw_ios_per_sec": 0, 00:28:45.478 "rw_mbytes_per_sec": 0, 00:28:45.478 "r_mbytes_per_sec": 0, 00:28:45.478 "w_mbytes_per_sec": 0 00:28:45.478 }, 00:28:45.478 "claimed": true, 00:28:45.478 "claim_type": "exclusive_write", 00:28:45.478 "zoned": false, 00:28:45.478 "supported_io_types": { 00:28:45.478 "read": true, 00:28:45.478 "write": true, 00:28:45.478 "unmap": true, 00:28:45.478 "flush": true, 00:28:45.478 "reset": true, 00:28:45.478 "nvme_admin": false, 00:28:45.478 "nvme_io": false, 00:28:45.478 "nvme_io_md": false, 00:28:45.478 "write_zeroes": true, 00:28:45.478 "zcopy": true, 00:28:45.478 "get_zone_info": false, 00:28:45.478 "zone_management": false, 00:28:45.478 "zone_append": false, 00:28:45.478 "compare": false, 00:28:45.478 "compare_and_write": false, 00:28:45.478 "abort": true, 00:28:45.478 "seek_hole": false, 00:28:45.478 "seek_data": false, 00:28:45.478 "copy": true, 00:28:45.478 "nvme_iov_md": false 00:28:45.478 }, 00:28:45.478 "memory_domains": [ 00:28:45.478 { 00:28:45.478 "dma_device_id": "system", 00:28:45.479 "dma_device_type": 1 
00:28:45.479 }, 00:28:45.479 { 00:28:45.479 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:45.479 "dma_device_type": 2 00:28:45.479 } 00:28:45.479 ], 00:28:45.479 "driver_specific": {} 00:28:45.479 } 00:28:45.479 ] 00:28:45.479 18:27:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:45.479 18:27:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:28:45.479 18:27:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:28:45.479 18:27:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:28:45.479 18:27:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:28:45.479 18:27:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:28:45.479 18:27:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:28:45.479 18:27:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:28:45.479 18:27:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:28:45.479 18:27:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:28:45.479 18:27:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:28:45.479 18:27:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:28:45.479 18:27:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:45.479 18:27:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:45.479 18:27:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:28:45.479 18:27:16 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:28:45.479 18:27:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:45.479 18:27:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:28:45.479 "name": "Existed_Raid", 00:28:45.479 "uuid": "c9f19b5f-e100-4502-90dd-07f0bc42d089", 00:28:45.479 "strip_size_kb": 0, 00:28:45.479 "state": "online", 00:28:45.479 "raid_level": "raid1", 00:28:45.479 "superblock": false, 00:28:45.479 "num_base_bdevs": 3, 00:28:45.479 "num_base_bdevs_discovered": 3, 00:28:45.479 "num_base_bdevs_operational": 3, 00:28:45.479 "base_bdevs_list": [ 00:28:45.479 { 00:28:45.479 "name": "NewBaseBdev", 00:28:45.479 "uuid": "7b0ca7ab-f91c-428a-9f54-09479324be8b", 00:28:45.479 "is_configured": true, 00:28:45.479 "data_offset": 0, 00:28:45.479 "data_size": 65536 00:28:45.479 }, 00:28:45.479 { 00:28:45.479 "name": "BaseBdev2", 00:28:45.479 "uuid": "6d9f0f01-bf0a-47b9-8f90-ef92b4d24645", 00:28:45.479 "is_configured": true, 00:28:45.479 "data_offset": 0, 00:28:45.479 "data_size": 65536 00:28:45.479 }, 00:28:45.479 { 00:28:45.479 "name": "BaseBdev3", 00:28:45.479 "uuid": "35b029cd-1c84-4c76-9d62-a114944aa56f", 00:28:45.479 "is_configured": true, 00:28:45.479 "data_offset": 0, 00:28:45.479 "data_size": 65536 00:28:45.479 } 00:28:45.479 ] 00:28:45.479 }' 00:28:45.479 18:27:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:28:45.479 18:27:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:46.044 18:27:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:28:46.044 18:27:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:28:46.044 18:27:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:28:46.044 18:27:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # 
local base_bdev_names 00:28:46.044 18:27:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:28:46.044 18:27:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:28:46.044 18:27:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:28:46.044 18:27:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:28:46.044 18:27:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:46.044 18:27:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:46.044 [2024-12-06 18:27:16.699897] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:28:46.044 18:27:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:46.044 18:27:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:28:46.044 "name": "Existed_Raid", 00:28:46.044 "aliases": [ 00:28:46.044 "c9f19b5f-e100-4502-90dd-07f0bc42d089" 00:28:46.044 ], 00:28:46.044 "product_name": "Raid Volume", 00:28:46.044 "block_size": 512, 00:28:46.044 "num_blocks": 65536, 00:28:46.044 "uuid": "c9f19b5f-e100-4502-90dd-07f0bc42d089", 00:28:46.044 "assigned_rate_limits": { 00:28:46.044 "rw_ios_per_sec": 0, 00:28:46.044 "rw_mbytes_per_sec": 0, 00:28:46.044 "r_mbytes_per_sec": 0, 00:28:46.044 "w_mbytes_per_sec": 0 00:28:46.044 }, 00:28:46.044 "claimed": false, 00:28:46.044 "zoned": false, 00:28:46.044 "supported_io_types": { 00:28:46.044 "read": true, 00:28:46.044 "write": true, 00:28:46.044 "unmap": false, 00:28:46.044 "flush": false, 00:28:46.044 "reset": true, 00:28:46.044 "nvme_admin": false, 00:28:46.044 "nvme_io": false, 00:28:46.044 "nvme_io_md": false, 00:28:46.044 "write_zeroes": true, 00:28:46.044 "zcopy": false, 00:28:46.044 "get_zone_info": false, 00:28:46.044 "zone_management": false, 00:28:46.044 
"zone_append": false, 00:28:46.044 "compare": false, 00:28:46.044 "compare_and_write": false, 00:28:46.044 "abort": false, 00:28:46.044 "seek_hole": false, 00:28:46.044 "seek_data": false, 00:28:46.044 "copy": false, 00:28:46.044 "nvme_iov_md": false 00:28:46.044 }, 00:28:46.044 "memory_domains": [ 00:28:46.044 { 00:28:46.044 "dma_device_id": "system", 00:28:46.044 "dma_device_type": 1 00:28:46.044 }, 00:28:46.044 { 00:28:46.044 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:46.044 "dma_device_type": 2 00:28:46.044 }, 00:28:46.044 { 00:28:46.044 "dma_device_id": "system", 00:28:46.044 "dma_device_type": 1 00:28:46.044 }, 00:28:46.044 { 00:28:46.044 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:46.044 "dma_device_type": 2 00:28:46.044 }, 00:28:46.044 { 00:28:46.044 "dma_device_id": "system", 00:28:46.044 "dma_device_type": 1 00:28:46.044 }, 00:28:46.044 { 00:28:46.044 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:46.044 "dma_device_type": 2 00:28:46.044 } 00:28:46.044 ], 00:28:46.044 "driver_specific": { 00:28:46.044 "raid": { 00:28:46.044 "uuid": "c9f19b5f-e100-4502-90dd-07f0bc42d089", 00:28:46.044 "strip_size_kb": 0, 00:28:46.044 "state": "online", 00:28:46.044 "raid_level": "raid1", 00:28:46.044 "superblock": false, 00:28:46.044 "num_base_bdevs": 3, 00:28:46.044 "num_base_bdevs_discovered": 3, 00:28:46.044 "num_base_bdevs_operational": 3, 00:28:46.044 "base_bdevs_list": [ 00:28:46.044 { 00:28:46.044 "name": "NewBaseBdev", 00:28:46.044 "uuid": "7b0ca7ab-f91c-428a-9f54-09479324be8b", 00:28:46.044 "is_configured": true, 00:28:46.044 "data_offset": 0, 00:28:46.044 "data_size": 65536 00:28:46.044 }, 00:28:46.044 { 00:28:46.044 "name": "BaseBdev2", 00:28:46.044 "uuid": "6d9f0f01-bf0a-47b9-8f90-ef92b4d24645", 00:28:46.044 "is_configured": true, 00:28:46.044 "data_offset": 0, 00:28:46.044 "data_size": 65536 00:28:46.044 }, 00:28:46.044 { 00:28:46.044 "name": "BaseBdev3", 00:28:46.044 "uuid": "35b029cd-1c84-4c76-9d62-a114944aa56f", 00:28:46.044 "is_configured": true, 
00:28:46.044 "data_offset": 0, 00:28:46.044 "data_size": 65536 00:28:46.044 } 00:28:46.044 ] 00:28:46.044 } 00:28:46.044 } 00:28:46.044 }' 00:28:46.044 18:27:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:28:46.044 18:27:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:28:46.044 BaseBdev2 00:28:46.044 BaseBdev3' 00:28:46.044 18:27:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:28:46.044 18:27:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:28:46.044 18:27:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:28:46.044 18:27:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:28:46.044 18:27:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:28:46.044 18:27:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:46.044 18:27:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:46.044 18:27:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:46.044 18:27:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:28:46.044 18:27:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:28:46.044 18:27:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:28:46.044 18:27:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:28:46.044 18:27:16 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:28:46.044 18:27:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:46.044 18:27:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:28:46.044 18:27:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:46.044 18:27:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:28:46.044 18:27:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:28:46.044 18:27:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:28:46.044 18:27:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:28:46.044 18:27:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:46.044 18:27:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:46.044 18:27:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:28:46.044 18:27:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:46.044 18:27:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:28:46.044 18:27:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:28:46.044 18:27:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:28:46.044 18:27:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:46.044 18:27:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:46.044 [2024-12-06 18:27:16.967258] bdev_raid.c:2411:raid_bdev_delete: 
*DEBUG*: delete raid bdev: Existed_Raid 00:28:46.044 [2024-12-06 18:27:16.967301] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:28:46.044 [2024-12-06 18:27:16.967381] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:28:46.044 [2024-12-06 18:27:16.967667] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:28:46.044 [2024-12-06 18:27:16.967681] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:28:46.044 18:27:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:46.044 18:27:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 67138 00:28:46.044 18:27:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 67138 ']' 00:28:46.044 18:27:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 67138 00:28:46.044 18:27:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:28:46.045 18:27:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:46.045 18:27:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67138 00:28:46.301 18:27:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:28:46.301 killing process with pid 67138 00:28:46.301 18:27:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:28:46.301 18:27:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67138' 00:28:46.301 18:27:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 67138 00:28:46.301 [2024-12-06 18:27:17.020370] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: 
raid_bdev_fini_start 00:28:46.301 18:27:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 67138 00:28:46.559 [2024-12-06 18:27:17.327307] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:28:47.935 18:27:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:28:47.935 00:28:47.935 real 0m10.383s 00:28:47.935 user 0m16.408s 00:28:47.935 sys 0m2.134s 00:28:47.935 18:27:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:47.935 18:27:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:28:47.935 ************************************ 00:28:47.935 END TEST raid_state_function_test 00:28:47.935 ************************************ 00:28:47.935 18:27:18 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 3 true 00:28:47.935 18:27:18 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:28:47.935 18:27:18 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:47.935 18:27:18 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:28:47.935 ************************************ 00:28:47.935 START TEST raid_state_function_test_sb 00:28:47.935 ************************************ 00:28:47.935 18:27:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 3 true 00:28:47.935 18:27:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:28:47.935 18:27:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:28:47.935 18:27:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:28:47.935 18:27:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:28:47.935 18:27:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 
00:28:47.935 18:27:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:28:47.935 18:27:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:28:47.935 18:27:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:28:47.935 18:27:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:28:47.935 18:27:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:28:47.935 18:27:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:28:47.935 18:27:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:28:47.935 18:27:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:28:47.935 18:27:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:28:47.935 18:27:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:28:47.935 18:27:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:28:47.935 18:27:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:28:47.935 18:27:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:28:47.935 18:27:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:28:47.935 18:27:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:28:47.935 18:27:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:28:47.935 18:27:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:28:47.935 18:27:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@219 -- # 
strip_size=0 00:28:47.935 18:27:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:28:47.935 18:27:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:28:47.935 18:27:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=67759 00:28:47.935 18:27:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:28:47.935 Process raid pid: 67759 00:28:47.935 18:27:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 67759' 00:28:47.935 18:27:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 67759 00:28:47.935 18:27:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 67759 ']' 00:28:47.935 18:27:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:47.935 18:27:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:47.935 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:47.935 18:27:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:47.935 18:27:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:47.935 18:27:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:47.935 [2024-12-06 18:27:18.651902] Starting SPDK v25.01-pre git sha1 b6a18b192 / DPDK 24.03.0 initialization... 
00:28:47.935 [2024-12-06 18:27:18.652032] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:47.935 [2024-12-06 18:27:18.834106] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:48.194 [2024-12-06 18:27:18.953753] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:48.453 [2024-12-06 18:27:19.168893] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:28:48.453 [2024-12-06 18:27:19.168949] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:28:48.712 18:27:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:48.712 18:27:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:28:48.712 18:27:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:28:48.712 18:27:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:48.712 18:27:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:48.712 [2024-12-06 18:27:19.500628] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:28:48.713 [2024-12-06 18:27:19.500690] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:28:48.713 [2024-12-06 18:27:19.500708] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:28:48.713 [2024-12-06 18:27:19.500722] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:28:48.713 [2024-12-06 18:27:19.500730] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:28:48.713 [2024-12-06 18:27:19.500742] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:28:48.713 18:27:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:48.713 18:27:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:28:48.713 18:27:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:28:48.713 18:27:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:28:48.713 18:27:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:28:48.713 18:27:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:28:48.713 18:27:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:28:48.713 18:27:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:28:48.713 18:27:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:28:48.713 18:27:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:28:48.713 18:27:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:28:48.713 18:27:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:28:48.713 18:27:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:48.713 18:27:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:48.713 18:27:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:48.713 18:27:19 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:48.713 18:27:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:28:48.713 "name": "Existed_Raid", 00:28:48.713 "uuid": "d13a5643-b793-42a1-b1e6-f4fd2ec03c47", 00:28:48.713 "strip_size_kb": 0, 00:28:48.713 "state": "configuring", 00:28:48.713 "raid_level": "raid1", 00:28:48.713 "superblock": true, 00:28:48.713 "num_base_bdevs": 3, 00:28:48.713 "num_base_bdevs_discovered": 0, 00:28:48.713 "num_base_bdevs_operational": 3, 00:28:48.713 "base_bdevs_list": [ 00:28:48.713 { 00:28:48.713 "name": "BaseBdev1", 00:28:48.713 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:48.713 "is_configured": false, 00:28:48.713 "data_offset": 0, 00:28:48.713 "data_size": 0 00:28:48.713 }, 00:28:48.713 { 00:28:48.713 "name": "BaseBdev2", 00:28:48.713 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:48.713 "is_configured": false, 00:28:48.713 "data_offset": 0, 00:28:48.713 "data_size": 0 00:28:48.713 }, 00:28:48.713 { 00:28:48.713 "name": "BaseBdev3", 00:28:48.713 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:48.713 "is_configured": false, 00:28:48.713 "data_offset": 0, 00:28:48.713 "data_size": 0 00:28:48.713 } 00:28:48.713 ] 00:28:48.713 }' 00:28:48.713 18:27:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:28:48.713 18:27:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:49.286 18:27:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:28:49.287 18:27:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:49.287 18:27:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:49.287 [2024-12-06 18:27:19.943968] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:28:49.287 [2024-12-06 18:27:19.944016] bdev_raid.c: 380:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:28:49.287 18:27:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:49.287 18:27:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:28:49.287 18:27:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:49.287 18:27:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:49.287 [2024-12-06 18:27:19.955958] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:28:49.287 [2024-12-06 18:27:19.956014] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:28:49.287 [2024-12-06 18:27:19.956025] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:28:49.287 [2024-12-06 18:27:19.956038] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:28:49.287 [2024-12-06 18:27:19.956045] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:28:49.287 [2024-12-06 18:27:19.956057] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:28:49.287 18:27:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:49.287 18:27:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:28:49.287 18:27:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:49.287 18:27:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:49.287 [2024-12-06 18:27:20.004877] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:28:49.287 BaseBdev1 
00:28:49.287 18:27:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:49.287 18:27:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:28:49.287 18:27:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:28:49.287 18:27:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:28:49.287 18:27:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:28:49.287 18:27:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:28:49.287 18:27:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:28:49.287 18:27:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:28:49.287 18:27:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:49.287 18:27:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:49.287 18:27:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:49.287 18:27:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:28:49.287 18:27:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:49.287 18:27:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:49.287 [ 00:28:49.287 { 00:28:49.287 "name": "BaseBdev1", 00:28:49.287 "aliases": [ 00:28:49.287 "f620c59e-94b1-49cc-9935-c98cd347f44b" 00:28:49.287 ], 00:28:49.287 "product_name": "Malloc disk", 00:28:49.287 "block_size": 512, 00:28:49.287 "num_blocks": 65536, 00:28:49.287 "uuid": "f620c59e-94b1-49cc-9935-c98cd347f44b", 00:28:49.287 "assigned_rate_limits": { 00:28:49.287 
"rw_ios_per_sec": 0, 00:28:49.287 "rw_mbytes_per_sec": 0, 00:28:49.287 "r_mbytes_per_sec": 0, 00:28:49.287 "w_mbytes_per_sec": 0 00:28:49.287 }, 00:28:49.287 "claimed": true, 00:28:49.287 "claim_type": "exclusive_write", 00:28:49.287 "zoned": false, 00:28:49.287 "supported_io_types": { 00:28:49.287 "read": true, 00:28:49.287 "write": true, 00:28:49.287 "unmap": true, 00:28:49.287 "flush": true, 00:28:49.287 "reset": true, 00:28:49.287 "nvme_admin": false, 00:28:49.287 "nvme_io": false, 00:28:49.287 "nvme_io_md": false, 00:28:49.287 "write_zeroes": true, 00:28:49.287 "zcopy": true, 00:28:49.287 "get_zone_info": false, 00:28:49.287 "zone_management": false, 00:28:49.287 "zone_append": false, 00:28:49.287 "compare": false, 00:28:49.287 "compare_and_write": false, 00:28:49.287 "abort": true, 00:28:49.287 "seek_hole": false, 00:28:49.287 "seek_data": false, 00:28:49.287 "copy": true, 00:28:49.287 "nvme_iov_md": false 00:28:49.287 }, 00:28:49.287 "memory_domains": [ 00:28:49.287 { 00:28:49.287 "dma_device_id": "system", 00:28:49.287 "dma_device_type": 1 00:28:49.287 }, 00:28:49.287 { 00:28:49.287 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:49.287 "dma_device_type": 2 00:28:49.287 } 00:28:49.287 ], 00:28:49.287 "driver_specific": {} 00:28:49.287 } 00:28:49.287 ] 00:28:49.287 18:27:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:49.287 18:27:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:28:49.287 18:27:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:28:49.287 18:27:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:28:49.287 18:27:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:28:49.287 18:27:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:28:49.287 18:27:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:28:49.287 18:27:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:28:49.287 18:27:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:28:49.287 18:27:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:28:49.287 18:27:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:28:49.287 18:27:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:28:49.287 18:27:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:49.287 18:27:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:49.287 18:27:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:49.287 18:27:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:28:49.287 18:27:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:49.287 18:27:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:28:49.287 "name": "Existed_Raid", 00:28:49.287 "uuid": "806207f5-377b-424c-bb56-d04f0c483be1", 00:28:49.287 "strip_size_kb": 0, 00:28:49.287 "state": "configuring", 00:28:49.287 "raid_level": "raid1", 00:28:49.287 "superblock": true, 00:28:49.287 "num_base_bdevs": 3, 00:28:49.287 "num_base_bdevs_discovered": 1, 00:28:49.287 "num_base_bdevs_operational": 3, 00:28:49.287 "base_bdevs_list": [ 00:28:49.287 { 00:28:49.287 "name": "BaseBdev1", 00:28:49.287 "uuid": "f620c59e-94b1-49cc-9935-c98cd347f44b", 00:28:49.287 "is_configured": true, 00:28:49.287 "data_offset": 2048, 00:28:49.287 "data_size": 63488 
00:28:49.287 }, 00:28:49.287 { 00:28:49.287 "name": "BaseBdev2", 00:28:49.287 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:49.287 "is_configured": false, 00:28:49.287 "data_offset": 0, 00:28:49.287 "data_size": 0 00:28:49.287 }, 00:28:49.287 { 00:28:49.287 "name": "BaseBdev3", 00:28:49.287 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:49.287 "is_configured": false, 00:28:49.287 "data_offset": 0, 00:28:49.287 "data_size": 0 00:28:49.287 } 00:28:49.287 ] 00:28:49.287 }' 00:28:49.287 18:27:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:28:49.287 18:27:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:49.547 18:27:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:28:49.547 18:27:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:49.547 18:27:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:49.547 [2024-12-06 18:27:20.448309] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:28:49.547 [2024-12-06 18:27:20.448369] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:28:49.547 18:27:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:49.547 18:27:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:28:49.547 18:27:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:49.547 18:27:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:49.547 [2024-12-06 18:27:20.460364] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:28:49.547 [2024-12-06 18:27:20.462474] 
bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:28:49.547 [2024-12-06 18:27:20.462523] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:28:49.547 [2024-12-06 18:27:20.462535] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:28:49.547 [2024-12-06 18:27:20.462547] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:28:49.547 18:27:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:49.547 18:27:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:28:49.547 18:27:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:28:49.547 18:27:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:28:49.547 18:27:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:28:49.547 18:27:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:28:49.547 18:27:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:28:49.547 18:27:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:28:49.547 18:27:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:28:49.547 18:27:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:28:49.547 18:27:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:28:49.547 18:27:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:28:49.547 18:27:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 
00:28:49.547 18:27:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:28:49.547 18:27:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:49.547 18:27:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:49.547 18:27:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:49.805 18:27:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:49.805 18:27:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:28:49.805 "name": "Existed_Raid", 00:28:49.805 "uuid": "8eb54b23-f7a0-4ef8-af0b-af2528370121", 00:28:49.805 "strip_size_kb": 0, 00:28:49.805 "state": "configuring", 00:28:49.805 "raid_level": "raid1", 00:28:49.805 "superblock": true, 00:28:49.805 "num_base_bdevs": 3, 00:28:49.805 "num_base_bdevs_discovered": 1, 00:28:49.805 "num_base_bdevs_operational": 3, 00:28:49.805 "base_bdevs_list": [ 00:28:49.805 { 00:28:49.805 "name": "BaseBdev1", 00:28:49.805 "uuid": "f620c59e-94b1-49cc-9935-c98cd347f44b", 00:28:49.805 "is_configured": true, 00:28:49.805 "data_offset": 2048, 00:28:49.805 "data_size": 63488 00:28:49.805 }, 00:28:49.805 { 00:28:49.805 "name": "BaseBdev2", 00:28:49.805 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:49.805 "is_configured": false, 00:28:49.805 "data_offset": 0, 00:28:49.805 "data_size": 0 00:28:49.805 }, 00:28:49.805 { 00:28:49.805 "name": "BaseBdev3", 00:28:49.805 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:49.805 "is_configured": false, 00:28:49.805 "data_offset": 0, 00:28:49.805 "data_size": 0 00:28:49.805 } 00:28:49.805 ] 00:28:49.805 }' 00:28:49.805 18:27:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:28:49.805 18:27:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:28:50.065 18:27:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:28:50.065 18:27:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:50.065 18:27:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:50.065 [2024-12-06 18:27:20.870677] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:28:50.065 BaseBdev2 00:28:50.065 18:27:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:50.065 18:27:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:28:50.065 18:27:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:28:50.065 18:27:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:28:50.065 18:27:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:28:50.065 18:27:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:28:50.065 18:27:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:28:50.065 18:27:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:28:50.065 18:27:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:50.065 18:27:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:50.065 18:27:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:50.065 18:27:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:28:50.065 18:27:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:28:50.065 18:27:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:50.065 [ 00:28:50.065 { 00:28:50.065 "name": "BaseBdev2", 00:28:50.065 "aliases": [ 00:28:50.065 "e38c27d9-38e1-4325-8e80-fa6eb45b6935" 00:28:50.065 ], 00:28:50.065 "product_name": "Malloc disk", 00:28:50.065 "block_size": 512, 00:28:50.065 "num_blocks": 65536, 00:28:50.065 "uuid": "e38c27d9-38e1-4325-8e80-fa6eb45b6935", 00:28:50.065 "assigned_rate_limits": { 00:28:50.065 "rw_ios_per_sec": 0, 00:28:50.065 "rw_mbytes_per_sec": 0, 00:28:50.065 "r_mbytes_per_sec": 0, 00:28:50.065 "w_mbytes_per_sec": 0 00:28:50.065 }, 00:28:50.065 "claimed": true, 00:28:50.065 "claim_type": "exclusive_write", 00:28:50.065 "zoned": false, 00:28:50.065 "supported_io_types": { 00:28:50.065 "read": true, 00:28:50.065 "write": true, 00:28:50.065 "unmap": true, 00:28:50.065 "flush": true, 00:28:50.065 "reset": true, 00:28:50.065 "nvme_admin": false, 00:28:50.065 "nvme_io": false, 00:28:50.065 "nvme_io_md": false, 00:28:50.065 "write_zeroes": true, 00:28:50.065 "zcopy": true, 00:28:50.065 "get_zone_info": false, 00:28:50.065 "zone_management": false, 00:28:50.065 "zone_append": false, 00:28:50.065 "compare": false, 00:28:50.065 "compare_and_write": false, 00:28:50.065 "abort": true, 00:28:50.065 "seek_hole": false, 00:28:50.065 "seek_data": false, 00:28:50.065 "copy": true, 00:28:50.065 "nvme_iov_md": false 00:28:50.065 }, 00:28:50.065 "memory_domains": [ 00:28:50.065 { 00:28:50.065 "dma_device_id": "system", 00:28:50.065 "dma_device_type": 1 00:28:50.065 }, 00:28:50.065 { 00:28:50.065 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:50.065 "dma_device_type": 2 00:28:50.065 } 00:28:50.065 ], 00:28:50.065 "driver_specific": {} 00:28:50.065 } 00:28:50.065 ] 00:28:50.065 18:27:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:50.065 18:27:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 
00:28:50.065 18:27:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:28:50.065 18:27:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:28:50.065 18:27:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:28:50.065 18:27:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:28:50.065 18:27:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:28:50.065 18:27:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:28:50.065 18:27:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:28:50.065 18:27:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:28:50.065 18:27:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:28:50.065 18:27:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:28:50.065 18:27:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:28:50.065 18:27:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:28:50.065 18:27:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:50.065 18:27:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:50.065 18:27:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:50.065 18:27:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:28:50.065 18:27:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:50.065 
18:27:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:28:50.065 "name": "Existed_Raid", 00:28:50.065 "uuid": "8eb54b23-f7a0-4ef8-af0b-af2528370121", 00:28:50.065 "strip_size_kb": 0, 00:28:50.065 "state": "configuring", 00:28:50.065 "raid_level": "raid1", 00:28:50.065 "superblock": true, 00:28:50.065 "num_base_bdevs": 3, 00:28:50.065 "num_base_bdevs_discovered": 2, 00:28:50.065 "num_base_bdevs_operational": 3, 00:28:50.065 "base_bdevs_list": [ 00:28:50.065 { 00:28:50.066 "name": "BaseBdev1", 00:28:50.066 "uuid": "f620c59e-94b1-49cc-9935-c98cd347f44b", 00:28:50.066 "is_configured": true, 00:28:50.066 "data_offset": 2048, 00:28:50.066 "data_size": 63488 00:28:50.066 }, 00:28:50.066 { 00:28:50.066 "name": "BaseBdev2", 00:28:50.066 "uuid": "e38c27d9-38e1-4325-8e80-fa6eb45b6935", 00:28:50.066 "is_configured": true, 00:28:50.066 "data_offset": 2048, 00:28:50.066 "data_size": 63488 00:28:50.066 }, 00:28:50.066 { 00:28:50.066 "name": "BaseBdev3", 00:28:50.066 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:50.066 "is_configured": false, 00:28:50.066 "data_offset": 0, 00:28:50.066 "data_size": 0 00:28:50.066 } 00:28:50.066 ] 00:28:50.066 }' 00:28:50.066 18:27:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:28:50.066 18:27:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:50.635 18:27:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:28:50.635 18:27:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:50.635 18:27:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:50.635 [2024-12-06 18:27:21.388908] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:28:50.635 [2024-12-06 18:27:21.389214] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 
0x617000007e80 00:28:50.635 [2024-12-06 18:27:21.389238] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:28:50.635 [2024-12-06 18:27:21.389517] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:28:50.635 [2024-12-06 18:27:21.389713] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:28:50.635 [2024-12-06 18:27:21.389724] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:28:50.635 BaseBdev3 00:28:50.635 [2024-12-06 18:27:21.389908] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:50.635 18:27:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:50.635 18:27:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:28:50.635 18:27:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:28:50.635 18:27:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:28:50.635 18:27:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:28:50.635 18:27:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:28:50.635 18:27:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:28:50.635 18:27:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:28:50.635 18:27:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:50.635 18:27:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:50.635 18:27:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:50.635 18:27:21 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:28:50.635 18:27:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:50.635 18:27:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:50.635 [ 00:28:50.635 { 00:28:50.635 "name": "BaseBdev3", 00:28:50.635 "aliases": [ 00:28:50.635 "bcba170c-f55b-4331-b9b1-5736fd5e1eb0" 00:28:50.635 ], 00:28:50.635 "product_name": "Malloc disk", 00:28:50.635 "block_size": 512, 00:28:50.635 "num_blocks": 65536, 00:28:50.635 "uuid": "bcba170c-f55b-4331-b9b1-5736fd5e1eb0", 00:28:50.635 "assigned_rate_limits": { 00:28:50.635 "rw_ios_per_sec": 0, 00:28:50.635 "rw_mbytes_per_sec": 0, 00:28:50.635 "r_mbytes_per_sec": 0, 00:28:50.635 "w_mbytes_per_sec": 0 00:28:50.635 }, 00:28:50.635 "claimed": true, 00:28:50.635 "claim_type": "exclusive_write", 00:28:50.635 "zoned": false, 00:28:50.635 "supported_io_types": { 00:28:50.635 "read": true, 00:28:50.635 "write": true, 00:28:50.635 "unmap": true, 00:28:50.635 "flush": true, 00:28:50.635 "reset": true, 00:28:50.635 "nvme_admin": false, 00:28:50.635 "nvme_io": false, 00:28:50.635 "nvme_io_md": false, 00:28:50.635 "write_zeroes": true, 00:28:50.635 "zcopy": true, 00:28:50.636 "get_zone_info": false, 00:28:50.636 "zone_management": false, 00:28:50.636 "zone_append": false, 00:28:50.636 "compare": false, 00:28:50.636 "compare_and_write": false, 00:28:50.636 "abort": true, 00:28:50.636 "seek_hole": false, 00:28:50.636 "seek_data": false, 00:28:50.636 "copy": true, 00:28:50.636 "nvme_iov_md": false 00:28:50.636 }, 00:28:50.636 "memory_domains": [ 00:28:50.636 { 00:28:50.636 "dma_device_id": "system", 00:28:50.636 "dma_device_type": 1 00:28:50.636 }, 00:28:50.636 { 00:28:50.636 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:50.636 "dma_device_type": 2 00:28:50.636 } 00:28:50.636 ], 00:28:50.636 "driver_specific": {} 00:28:50.636 } 00:28:50.636 ] 
00:28:50.636 18:27:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:50.636 18:27:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:28:50.636 18:27:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:28:50.636 18:27:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:28:50.636 18:27:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:28:50.636 18:27:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:28:50.636 18:27:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:28:50.636 18:27:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:28:50.636 18:27:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:28:50.636 18:27:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:28:50.636 18:27:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:28:50.636 18:27:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:28:50.636 18:27:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:28:50.636 18:27:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:28:50.636 18:27:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:50.636 18:27:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:50.636 18:27:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:28:50.636 
18:27:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:50.636 18:27:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:50.636 18:27:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:28:50.636 "name": "Existed_Raid", 00:28:50.636 "uuid": "8eb54b23-f7a0-4ef8-af0b-af2528370121", 00:28:50.636 "strip_size_kb": 0, 00:28:50.636 "state": "online", 00:28:50.636 "raid_level": "raid1", 00:28:50.636 "superblock": true, 00:28:50.636 "num_base_bdevs": 3, 00:28:50.636 "num_base_bdevs_discovered": 3, 00:28:50.636 "num_base_bdevs_operational": 3, 00:28:50.636 "base_bdevs_list": [ 00:28:50.636 { 00:28:50.636 "name": "BaseBdev1", 00:28:50.636 "uuid": "f620c59e-94b1-49cc-9935-c98cd347f44b", 00:28:50.636 "is_configured": true, 00:28:50.636 "data_offset": 2048, 00:28:50.636 "data_size": 63488 00:28:50.636 }, 00:28:50.636 { 00:28:50.636 "name": "BaseBdev2", 00:28:50.636 "uuid": "e38c27d9-38e1-4325-8e80-fa6eb45b6935", 00:28:50.636 "is_configured": true, 00:28:50.636 "data_offset": 2048, 00:28:50.636 "data_size": 63488 00:28:50.636 }, 00:28:50.636 { 00:28:50.636 "name": "BaseBdev3", 00:28:50.636 "uuid": "bcba170c-f55b-4331-b9b1-5736fd5e1eb0", 00:28:50.636 "is_configured": true, 00:28:50.636 "data_offset": 2048, 00:28:50.636 "data_size": 63488 00:28:50.636 } 00:28:50.636 ] 00:28:50.636 }' 00:28:50.636 18:27:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:28:50.636 18:27:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:50.895 18:27:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:28:50.895 18:27:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:28:50.895 18:27:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 
00:28:50.895 18:27:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:28:50.895 18:27:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:28:50.895 18:27:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:28:50.895 18:27:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:28:50.895 18:27:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:28:50.895 18:27:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:51.154 18:27:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:51.154 [2024-12-06 18:27:21.852595] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:28:51.154 18:27:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:51.154 18:27:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:28:51.154 "name": "Existed_Raid", 00:28:51.154 "aliases": [ 00:28:51.154 "8eb54b23-f7a0-4ef8-af0b-af2528370121" 00:28:51.154 ], 00:28:51.154 "product_name": "Raid Volume", 00:28:51.154 "block_size": 512, 00:28:51.154 "num_blocks": 63488, 00:28:51.154 "uuid": "8eb54b23-f7a0-4ef8-af0b-af2528370121", 00:28:51.154 "assigned_rate_limits": { 00:28:51.154 "rw_ios_per_sec": 0, 00:28:51.154 "rw_mbytes_per_sec": 0, 00:28:51.154 "r_mbytes_per_sec": 0, 00:28:51.154 "w_mbytes_per_sec": 0 00:28:51.154 }, 00:28:51.154 "claimed": false, 00:28:51.154 "zoned": false, 00:28:51.154 "supported_io_types": { 00:28:51.154 "read": true, 00:28:51.154 "write": true, 00:28:51.154 "unmap": false, 00:28:51.154 "flush": false, 00:28:51.154 "reset": true, 00:28:51.154 "nvme_admin": false, 00:28:51.154 "nvme_io": false, 00:28:51.154 "nvme_io_md": false, 00:28:51.154 "write_zeroes": true, 
00:28:51.154 "zcopy": false, 00:28:51.154 "get_zone_info": false, 00:28:51.154 "zone_management": false, 00:28:51.154 "zone_append": false, 00:28:51.154 "compare": false, 00:28:51.154 "compare_and_write": false, 00:28:51.154 "abort": false, 00:28:51.154 "seek_hole": false, 00:28:51.154 "seek_data": false, 00:28:51.154 "copy": false, 00:28:51.154 "nvme_iov_md": false 00:28:51.154 }, 00:28:51.154 "memory_domains": [ 00:28:51.154 { 00:28:51.154 "dma_device_id": "system", 00:28:51.154 "dma_device_type": 1 00:28:51.154 }, 00:28:51.154 { 00:28:51.154 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:51.154 "dma_device_type": 2 00:28:51.154 }, 00:28:51.154 { 00:28:51.154 "dma_device_id": "system", 00:28:51.154 "dma_device_type": 1 00:28:51.154 }, 00:28:51.154 { 00:28:51.154 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:51.154 "dma_device_type": 2 00:28:51.154 }, 00:28:51.154 { 00:28:51.154 "dma_device_id": "system", 00:28:51.154 "dma_device_type": 1 00:28:51.154 }, 00:28:51.154 { 00:28:51.154 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:51.154 "dma_device_type": 2 00:28:51.154 } 00:28:51.154 ], 00:28:51.154 "driver_specific": { 00:28:51.154 "raid": { 00:28:51.154 "uuid": "8eb54b23-f7a0-4ef8-af0b-af2528370121", 00:28:51.154 "strip_size_kb": 0, 00:28:51.154 "state": "online", 00:28:51.154 "raid_level": "raid1", 00:28:51.154 "superblock": true, 00:28:51.154 "num_base_bdevs": 3, 00:28:51.154 "num_base_bdevs_discovered": 3, 00:28:51.154 "num_base_bdevs_operational": 3, 00:28:51.154 "base_bdevs_list": [ 00:28:51.154 { 00:28:51.154 "name": "BaseBdev1", 00:28:51.154 "uuid": "f620c59e-94b1-49cc-9935-c98cd347f44b", 00:28:51.154 "is_configured": true, 00:28:51.154 "data_offset": 2048, 00:28:51.154 "data_size": 63488 00:28:51.154 }, 00:28:51.154 { 00:28:51.154 "name": "BaseBdev2", 00:28:51.154 "uuid": "e38c27d9-38e1-4325-8e80-fa6eb45b6935", 00:28:51.154 "is_configured": true, 00:28:51.154 "data_offset": 2048, 00:28:51.154 "data_size": 63488 00:28:51.154 }, 00:28:51.154 { 
00:28:51.154 "name": "BaseBdev3", 00:28:51.154 "uuid": "bcba170c-f55b-4331-b9b1-5736fd5e1eb0", 00:28:51.154 "is_configured": true, 00:28:51.154 "data_offset": 2048, 00:28:51.154 "data_size": 63488 00:28:51.154 } 00:28:51.154 ] 00:28:51.154 } 00:28:51.154 } 00:28:51.154 }' 00:28:51.154 18:27:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:28:51.154 18:27:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:28:51.154 BaseBdev2 00:28:51.154 BaseBdev3' 00:28:51.154 18:27:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:28:51.154 18:27:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:28:51.154 18:27:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:28:51.154 18:27:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:28:51.154 18:27:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:51.154 18:27:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:28:51.154 18:27:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:51.154 18:27:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:51.154 18:27:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:28:51.154 18:27:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:28:51.154 18:27:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:28:51.154 18:27:22 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:28:51.154 18:27:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:51.154 18:27:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:51.154 18:27:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:28:51.154 18:27:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:51.154 18:27:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:28:51.154 18:27:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:28:51.154 18:27:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:28:51.154 18:27:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:28:51.154 18:27:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:28:51.154 18:27:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:51.154 18:27:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:51.414 18:27:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:51.414 18:27:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:28:51.414 18:27:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:28:51.414 18:27:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:28:51.414 18:27:22 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:28:51.414 18:27:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:51.414 [2024-12-06 18:27:22.116023] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:28:51.414 18:27:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:51.414 18:27:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:28:51.414 18:27:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:28:51.414 18:27:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:28:51.414 18:27:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:28:51.414 18:27:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:28:51.414 18:27:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:28:51.414 18:27:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:28:51.414 18:27:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:28:51.414 18:27:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:28:51.414 18:27:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:28:51.414 18:27:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:28:51.414 18:27:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:28:51.414 18:27:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:28:51.414 18:27:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:28:51.414 
18:27:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:28:51.414 18:27:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:51.414 18:27:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:51.414 18:27:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:28:51.414 18:27:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:51.414 18:27:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:51.414 18:27:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:28:51.414 "name": "Existed_Raid", 00:28:51.414 "uuid": "8eb54b23-f7a0-4ef8-af0b-af2528370121", 00:28:51.414 "strip_size_kb": 0, 00:28:51.414 "state": "online", 00:28:51.414 "raid_level": "raid1", 00:28:51.414 "superblock": true, 00:28:51.414 "num_base_bdevs": 3, 00:28:51.414 "num_base_bdevs_discovered": 2, 00:28:51.414 "num_base_bdevs_operational": 2, 00:28:51.414 "base_bdevs_list": [ 00:28:51.414 { 00:28:51.414 "name": null, 00:28:51.414 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:51.414 "is_configured": false, 00:28:51.414 "data_offset": 0, 00:28:51.414 "data_size": 63488 00:28:51.414 }, 00:28:51.414 { 00:28:51.414 "name": "BaseBdev2", 00:28:51.414 "uuid": "e38c27d9-38e1-4325-8e80-fa6eb45b6935", 00:28:51.414 "is_configured": true, 00:28:51.414 "data_offset": 2048, 00:28:51.414 "data_size": 63488 00:28:51.414 }, 00:28:51.414 { 00:28:51.414 "name": "BaseBdev3", 00:28:51.414 "uuid": "bcba170c-f55b-4331-b9b1-5736fd5e1eb0", 00:28:51.414 "is_configured": true, 00:28:51.414 "data_offset": 2048, 00:28:51.414 "data_size": 63488 00:28:51.414 } 00:28:51.414 ] 00:28:51.414 }' 00:28:51.414 18:27:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:28:51.414 
18:27:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:51.982 18:27:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:28:51.982 18:27:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:28:51.982 18:27:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:51.982 18:27:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:28:51.982 18:27:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:51.982 18:27:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:51.982 18:27:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:51.982 18:27:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:28:51.982 18:27:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:28:51.982 18:27:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:28:51.982 18:27:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:51.982 18:27:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:51.982 [2024-12-06 18:27:22.717010] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:28:51.982 18:27:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:51.982 18:27:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:28:51.982 18:27:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:28:51.982 18:27:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:28:51.982 18:27:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:28:51.982 18:27:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:51.982 18:27:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:51.982 18:27:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:51.982 18:27:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:28:51.982 18:27:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:28:51.982 18:27:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:28:51.982 18:27:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:51.982 18:27:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:51.982 [2024-12-06 18:27:22.866361] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:28:51.982 [2024-12-06 18:27:22.866465] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:28:52.242 [2024-12-06 18:27:22.963472] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:28:52.242 [2024-12-06 18:27:22.963528] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:28:52.242 [2024-12-06 18:27:22.963543] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:28:52.242 18:27:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:52.242 18:27:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:28:52.242 18:27:22 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:28:52.242 18:27:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:52.242 18:27:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:52.242 18:27:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:52.242 18:27:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:28:52.242 18:27:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:52.242 18:27:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:28:52.242 18:27:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:28:52.242 18:27:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:28:52.242 18:27:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:28:52.242 18:27:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:28:52.242 18:27:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:28:52.242 18:27:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:52.242 18:27:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:52.242 BaseBdev2 00:28:52.242 18:27:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:52.242 18:27:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:28:52.242 18:27:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:28:52.242 18:27:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 
00:28:52.242 18:27:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:28:52.242 18:27:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:28:52.242 18:27:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:28:52.242 18:27:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:28:52.242 18:27:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:52.242 18:27:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:52.242 18:27:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:52.242 18:27:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:28:52.242 18:27:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:52.242 18:27:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:52.242 [ 00:28:52.242 { 00:28:52.242 "name": "BaseBdev2", 00:28:52.242 "aliases": [ 00:28:52.242 "30e497ed-3fff-4611-91dc-a2ca75955ab8" 00:28:52.242 ], 00:28:52.242 "product_name": "Malloc disk", 00:28:52.242 "block_size": 512, 00:28:52.242 "num_blocks": 65536, 00:28:52.242 "uuid": "30e497ed-3fff-4611-91dc-a2ca75955ab8", 00:28:52.242 "assigned_rate_limits": { 00:28:52.242 "rw_ios_per_sec": 0, 00:28:52.242 "rw_mbytes_per_sec": 0, 00:28:52.242 "r_mbytes_per_sec": 0, 00:28:52.242 "w_mbytes_per_sec": 0 00:28:52.242 }, 00:28:52.242 "claimed": false, 00:28:52.242 "zoned": false, 00:28:52.242 "supported_io_types": { 00:28:52.242 "read": true, 00:28:52.242 "write": true, 00:28:52.242 "unmap": true, 00:28:52.242 "flush": true, 00:28:52.242 "reset": true, 00:28:52.242 "nvme_admin": false, 00:28:52.242 "nvme_io": false, 00:28:52.242 
"nvme_io_md": false, 00:28:52.242 "write_zeroes": true, 00:28:52.242 "zcopy": true, 00:28:52.242 "get_zone_info": false, 00:28:52.242 "zone_management": false, 00:28:52.242 "zone_append": false, 00:28:52.242 "compare": false, 00:28:52.242 "compare_and_write": false, 00:28:52.242 "abort": true, 00:28:52.242 "seek_hole": false, 00:28:52.242 "seek_data": false, 00:28:52.242 "copy": true, 00:28:52.242 "nvme_iov_md": false 00:28:52.242 }, 00:28:52.242 "memory_domains": [ 00:28:52.242 { 00:28:52.242 "dma_device_id": "system", 00:28:52.242 "dma_device_type": 1 00:28:52.242 }, 00:28:52.242 { 00:28:52.242 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:52.242 "dma_device_type": 2 00:28:52.242 } 00:28:52.242 ], 00:28:52.242 "driver_specific": {} 00:28:52.242 } 00:28:52.242 ] 00:28:52.242 18:27:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:52.242 18:27:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:28:52.242 18:27:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:28:52.242 18:27:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:28:52.242 18:27:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:28:52.242 18:27:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:52.242 18:27:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:52.242 BaseBdev3 00:28:52.242 18:27:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:52.242 18:27:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:28:52.242 18:27:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:28:52.242 18:27:23 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@904 -- # local bdev_timeout= 00:28:52.242 18:27:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:28:52.242 18:27:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:28:52.242 18:27:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:28:52.242 18:27:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:28:52.242 18:27:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:52.242 18:27:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:52.242 18:27:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:52.242 18:27:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:28:52.242 18:27:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:52.242 18:27:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:52.242 [ 00:28:52.242 { 00:28:52.242 "name": "BaseBdev3", 00:28:52.242 "aliases": [ 00:28:52.242 "91712f31-2070-4c2c-85a2-e4b578bc78b6" 00:28:52.242 ], 00:28:52.242 "product_name": "Malloc disk", 00:28:52.242 "block_size": 512, 00:28:52.242 "num_blocks": 65536, 00:28:52.242 "uuid": "91712f31-2070-4c2c-85a2-e4b578bc78b6", 00:28:52.242 "assigned_rate_limits": { 00:28:52.242 "rw_ios_per_sec": 0, 00:28:52.242 "rw_mbytes_per_sec": 0, 00:28:52.242 "r_mbytes_per_sec": 0, 00:28:52.242 "w_mbytes_per_sec": 0 00:28:52.242 }, 00:28:52.242 "claimed": false, 00:28:52.242 "zoned": false, 00:28:52.242 "supported_io_types": { 00:28:52.242 "read": true, 00:28:52.242 "write": true, 00:28:52.242 "unmap": true, 00:28:52.242 "flush": true, 00:28:52.242 "reset": true, 00:28:52.242 "nvme_admin": false, 
00:28:52.242 "nvme_io": false, 00:28:52.242 "nvme_io_md": false, 00:28:52.501 "write_zeroes": true, 00:28:52.501 "zcopy": true, 00:28:52.501 "get_zone_info": false, 00:28:52.501 "zone_management": false, 00:28:52.501 "zone_append": false, 00:28:52.501 "compare": false, 00:28:52.501 "compare_and_write": false, 00:28:52.501 "abort": true, 00:28:52.501 "seek_hole": false, 00:28:52.501 "seek_data": false, 00:28:52.501 "copy": true, 00:28:52.501 "nvme_iov_md": false 00:28:52.501 }, 00:28:52.501 "memory_domains": [ 00:28:52.501 { 00:28:52.501 "dma_device_id": "system", 00:28:52.501 "dma_device_type": 1 00:28:52.501 }, 00:28:52.501 { 00:28:52.501 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:52.501 "dma_device_type": 2 00:28:52.501 } 00:28:52.501 ], 00:28:52.501 "driver_specific": {} 00:28:52.501 } 00:28:52.501 ] 00:28:52.501 18:27:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:52.501 18:27:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:28:52.501 18:27:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:28:52.501 18:27:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:28:52.501 18:27:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:28:52.501 18:27:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:52.501 18:27:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:52.501 [2024-12-06 18:27:23.203084] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:28:52.501 [2024-12-06 18:27:23.203134] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:28:52.501 [2024-12-06 18:27:23.203169] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:28:52.501 [2024-12-06 18:27:23.205230] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:28:52.501 18:27:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:52.501 18:27:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:28:52.501 18:27:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:28:52.501 18:27:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:28:52.501 18:27:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:28:52.502 18:27:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:28:52.502 18:27:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:28:52.502 18:27:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:28:52.502 18:27:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:28:52.502 18:27:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:28:52.502 18:27:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:28:52.502 18:27:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:52.502 18:27:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:52.502 18:27:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:28:52.502 18:27:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:52.502 
18:27:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:52.502 18:27:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:28:52.502 "name": "Existed_Raid", 00:28:52.502 "uuid": "981c80e1-2e78-429f-98e9-b39799e67bd9", 00:28:52.502 "strip_size_kb": 0, 00:28:52.502 "state": "configuring", 00:28:52.502 "raid_level": "raid1", 00:28:52.502 "superblock": true, 00:28:52.502 "num_base_bdevs": 3, 00:28:52.502 "num_base_bdevs_discovered": 2, 00:28:52.502 "num_base_bdevs_operational": 3, 00:28:52.502 "base_bdevs_list": [ 00:28:52.502 { 00:28:52.502 "name": "BaseBdev1", 00:28:52.502 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:52.502 "is_configured": false, 00:28:52.502 "data_offset": 0, 00:28:52.502 "data_size": 0 00:28:52.502 }, 00:28:52.502 { 00:28:52.502 "name": "BaseBdev2", 00:28:52.502 "uuid": "30e497ed-3fff-4611-91dc-a2ca75955ab8", 00:28:52.502 "is_configured": true, 00:28:52.502 "data_offset": 2048, 00:28:52.502 "data_size": 63488 00:28:52.502 }, 00:28:52.502 { 00:28:52.502 "name": "BaseBdev3", 00:28:52.502 "uuid": "91712f31-2070-4c2c-85a2-e4b578bc78b6", 00:28:52.502 "is_configured": true, 00:28:52.502 "data_offset": 2048, 00:28:52.502 "data_size": 63488 00:28:52.502 } 00:28:52.502 ] 00:28:52.502 }' 00:28:52.502 18:27:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:28:52.502 18:27:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:52.761 18:27:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:28:52.761 18:27:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:52.761 18:27:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:52.761 [2024-12-06 18:27:23.618550] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:28:52.761 18:27:23 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:52.761 18:27:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:28:52.761 18:27:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:28:52.761 18:27:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:28:52.761 18:27:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:28:52.761 18:27:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:28:52.761 18:27:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:28:52.761 18:27:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:28:52.761 18:27:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:28:52.761 18:27:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:28:52.761 18:27:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:28:52.761 18:27:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:52.761 18:27:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:28:52.761 18:27:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:52.761 18:27:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:52.761 18:27:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:52.761 18:27:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:28:52.761 "name": 
"Existed_Raid", 00:28:52.761 "uuid": "981c80e1-2e78-429f-98e9-b39799e67bd9", 00:28:52.761 "strip_size_kb": 0, 00:28:52.761 "state": "configuring", 00:28:52.761 "raid_level": "raid1", 00:28:52.761 "superblock": true, 00:28:52.761 "num_base_bdevs": 3, 00:28:52.761 "num_base_bdevs_discovered": 1, 00:28:52.761 "num_base_bdevs_operational": 3, 00:28:52.761 "base_bdevs_list": [ 00:28:52.761 { 00:28:52.761 "name": "BaseBdev1", 00:28:52.761 "uuid": "00000000-0000-0000-0000-000000000000", 00:28:52.761 "is_configured": false, 00:28:52.761 "data_offset": 0, 00:28:52.761 "data_size": 0 00:28:52.761 }, 00:28:52.761 { 00:28:52.761 "name": null, 00:28:52.761 "uuid": "30e497ed-3fff-4611-91dc-a2ca75955ab8", 00:28:52.761 "is_configured": false, 00:28:52.761 "data_offset": 0, 00:28:52.761 "data_size": 63488 00:28:52.761 }, 00:28:52.761 { 00:28:52.761 "name": "BaseBdev3", 00:28:52.761 "uuid": "91712f31-2070-4c2c-85a2-e4b578bc78b6", 00:28:52.761 "is_configured": true, 00:28:52.761 "data_offset": 2048, 00:28:52.761 "data_size": 63488 00:28:52.761 } 00:28:52.761 ] 00:28:52.761 }' 00:28:52.761 18:27:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:28:52.761 18:27:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:53.328 18:27:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:53.328 18:27:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:53.328 18:27:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:53.328 18:27:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:28:53.328 18:27:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:53.328 18:27:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:28:53.328 
18:27:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:28:53.328 18:27:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:53.328 18:27:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:53.328 [2024-12-06 18:27:24.131824] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:28:53.328 BaseBdev1 00:28:53.328 18:27:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:53.328 18:27:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:28:53.328 18:27:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:28:53.328 18:27:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:28:53.328 18:27:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:28:53.328 18:27:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:28:53.328 18:27:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:28:53.328 18:27:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:28:53.328 18:27:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:53.328 18:27:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:53.328 18:27:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:53.328 18:27:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:28:53.328 18:27:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:28:53.328 18:27:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:53.328 [ 00:28:53.328 { 00:28:53.328 "name": "BaseBdev1", 00:28:53.328 "aliases": [ 00:28:53.328 "4dc00ec4-8bde-4675-8b50-e990f4deaacb" 00:28:53.328 ], 00:28:53.328 "product_name": "Malloc disk", 00:28:53.328 "block_size": 512, 00:28:53.328 "num_blocks": 65536, 00:28:53.328 "uuid": "4dc00ec4-8bde-4675-8b50-e990f4deaacb", 00:28:53.328 "assigned_rate_limits": { 00:28:53.328 "rw_ios_per_sec": 0, 00:28:53.328 "rw_mbytes_per_sec": 0, 00:28:53.328 "r_mbytes_per_sec": 0, 00:28:53.328 "w_mbytes_per_sec": 0 00:28:53.328 }, 00:28:53.328 "claimed": true, 00:28:53.328 "claim_type": "exclusive_write", 00:28:53.328 "zoned": false, 00:28:53.328 "supported_io_types": { 00:28:53.328 "read": true, 00:28:53.328 "write": true, 00:28:53.328 "unmap": true, 00:28:53.328 "flush": true, 00:28:53.328 "reset": true, 00:28:53.328 "nvme_admin": false, 00:28:53.328 "nvme_io": false, 00:28:53.328 "nvme_io_md": false, 00:28:53.328 "write_zeroes": true, 00:28:53.328 "zcopy": true, 00:28:53.328 "get_zone_info": false, 00:28:53.328 "zone_management": false, 00:28:53.328 "zone_append": false, 00:28:53.328 "compare": false, 00:28:53.328 "compare_and_write": false, 00:28:53.328 "abort": true, 00:28:53.328 "seek_hole": false, 00:28:53.328 "seek_data": false, 00:28:53.328 "copy": true, 00:28:53.328 "nvme_iov_md": false 00:28:53.328 }, 00:28:53.328 "memory_domains": [ 00:28:53.328 { 00:28:53.328 "dma_device_id": "system", 00:28:53.328 "dma_device_type": 1 00:28:53.328 }, 00:28:53.328 { 00:28:53.328 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:53.328 "dma_device_type": 2 00:28:53.328 } 00:28:53.328 ], 00:28:53.329 "driver_specific": {} 00:28:53.329 } 00:28:53.329 ] 00:28:53.329 18:27:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:53.329 18:27:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:28:53.329 
18:27:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:28:53.329 18:27:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:28:53.329 18:27:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:28:53.329 18:27:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:28:53.329 18:27:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:28:53.329 18:27:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:28:53.329 18:27:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:28:53.329 18:27:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:28:53.329 18:27:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:28:53.329 18:27:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:28:53.329 18:27:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:53.329 18:27:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:28:53.329 18:27:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:53.329 18:27:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:53.329 18:27:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:53.329 18:27:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:28:53.329 "name": "Existed_Raid", 00:28:53.329 "uuid": "981c80e1-2e78-429f-98e9-b39799e67bd9", 00:28:53.329 "strip_size_kb": 0, 
00:28:53.329 "state": "configuring", 00:28:53.329 "raid_level": "raid1", 00:28:53.329 "superblock": true, 00:28:53.329 "num_base_bdevs": 3, 00:28:53.329 "num_base_bdevs_discovered": 2, 00:28:53.329 "num_base_bdevs_operational": 3, 00:28:53.329 "base_bdevs_list": [ 00:28:53.329 { 00:28:53.329 "name": "BaseBdev1", 00:28:53.329 "uuid": "4dc00ec4-8bde-4675-8b50-e990f4deaacb", 00:28:53.329 "is_configured": true, 00:28:53.329 "data_offset": 2048, 00:28:53.329 "data_size": 63488 00:28:53.329 }, 00:28:53.329 { 00:28:53.329 "name": null, 00:28:53.329 "uuid": "30e497ed-3fff-4611-91dc-a2ca75955ab8", 00:28:53.329 "is_configured": false, 00:28:53.329 "data_offset": 0, 00:28:53.329 "data_size": 63488 00:28:53.329 }, 00:28:53.329 { 00:28:53.329 "name": "BaseBdev3", 00:28:53.329 "uuid": "91712f31-2070-4c2c-85a2-e4b578bc78b6", 00:28:53.329 "is_configured": true, 00:28:53.329 "data_offset": 2048, 00:28:53.329 "data_size": 63488 00:28:53.329 } 00:28:53.329 ] 00:28:53.329 }' 00:28:53.329 18:27:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:28:53.329 18:27:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:53.588 18:27:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:53.588 18:27:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:53.588 18:27:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:28:53.588 18:27:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:53.846 18:27:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:53.847 18:27:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:28:53.847 18:27:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev 
BaseBdev3 00:28:53.847 18:27:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:53.847 18:27:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:53.847 [2024-12-06 18:27:24.563251] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:28:53.847 18:27:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:53.847 18:27:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:28:53.847 18:27:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:28:53.847 18:27:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:28:53.847 18:27:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:28:53.847 18:27:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:28:53.847 18:27:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:28:53.847 18:27:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:28:53.847 18:27:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:28:53.847 18:27:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:28:53.847 18:27:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:28:53.847 18:27:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:53.847 18:27:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:53.847 18:27:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:53.847 18:27:24 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:28:53.847 18:27:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:53.847 18:27:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:28:53.847 "name": "Existed_Raid", 00:28:53.847 "uuid": "981c80e1-2e78-429f-98e9-b39799e67bd9", 00:28:53.847 "strip_size_kb": 0, 00:28:53.847 "state": "configuring", 00:28:53.847 "raid_level": "raid1", 00:28:53.847 "superblock": true, 00:28:53.847 "num_base_bdevs": 3, 00:28:53.847 "num_base_bdevs_discovered": 1, 00:28:53.847 "num_base_bdevs_operational": 3, 00:28:53.847 "base_bdevs_list": [ 00:28:53.847 { 00:28:53.847 "name": "BaseBdev1", 00:28:53.847 "uuid": "4dc00ec4-8bde-4675-8b50-e990f4deaacb", 00:28:53.847 "is_configured": true, 00:28:53.847 "data_offset": 2048, 00:28:53.847 "data_size": 63488 00:28:53.847 }, 00:28:53.847 { 00:28:53.847 "name": null, 00:28:53.847 "uuid": "30e497ed-3fff-4611-91dc-a2ca75955ab8", 00:28:53.847 "is_configured": false, 00:28:53.847 "data_offset": 0, 00:28:53.847 "data_size": 63488 00:28:53.847 }, 00:28:53.847 { 00:28:53.847 "name": null, 00:28:53.847 "uuid": "91712f31-2070-4c2c-85a2-e4b578bc78b6", 00:28:53.847 "is_configured": false, 00:28:53.847 "data_offset": 0, 00:28:53.847 "data_size": 63488 00:28:53.847 } 00:28:53.847 ] 00:28:53.847 }' 00:28:53.847 18:27:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:28:53.847 18:27:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:54.105 18:27:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:54.105 18:27:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:54.105 18:27:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:54.105 18:27:25 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:28:54.105 18:27:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:54.373 18:27:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:28:54.373 18:27:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:28:54.373 18:27:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:54.373 18:27:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:54.373 [2024-12-06 18:27:25.066614] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:28:54.373 18:27:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:54.373 18:27:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:28:54.373 18:27:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:28:54.374 18:27:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:28:54.374 18:27:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:28:54.374 18:27:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:28:54.374 18:27:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:28:54.374 18:27:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:28:54.374 18:27:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:28:54.374 18:27:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:28:54.374 18:27:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:28:54.374 18:27:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:54.374 18:27:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:28:54.374 18:27:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:54.374 18:27:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:54.374 18:27:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:54.374 18:27:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:28:54.374 "name": "Existed_Raid", 00:28:54.374 "uuid": "981c80e1-2e78-429f-98e9-b39799e67bd9", 00:28:54.374 "strip_size_kb": 0, 00:28:54.374 "state": "configuring", 00:28:54.374 "raid_level": "raid1", 00:28:54.374 "superblock": true, 00:28:54.374 "num_base_bdevs": 3, 00:28:54.374 "num_base_bdevs_discovered": 2, 00:28:54.374 "num_base_bdevs_operational": 3, 00:28:54.374 "base_bdevs_list": [ 00:28:54.374 { 00:28:54.374 "name": "BaseBdev1", 00:28:54.374 "uuid": "4dc00ec4-8bde-4675-8b50-e990f4deaacb", 00:28:54.374 "is_configured": true, 00:28:54.374 "data_offset": 2048, 00:28:54.374 "data_size": 63488 00:28:54.374 }, 00:28:54.374 { 00:28:54.374 "name": null, 00:28:54.374 "uuid": "30e497ed-3fff-4611-91dc-a2ca75955ab8", 00:28:54.374 "is_configured": false, 00:28:54.374 "data_offset": 0, 00:28:54.374 "data_size": 63488 00:28:54.374 }, 00:28:54.374 { 00:28:54.374 "name": "BaseBdev3", 00:28:54.374 "uuid": "91712f31-2070-4c2c-85a2-e4b578bc78b6", 00:28:54.374 "is_configured": true, 00:28:54.374 "data_offset": 2048, 00:28:54.374 "data_size": 63488 00:28:54.374 } 00:28:54.374 ] 00:28:54.374 }' 00:28:54.374 18:27:25 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:28:54.374 18:27:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:54.632 18:27:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:54.632 18:27:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:28:54.632 18:27:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:54.632 18:27:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:54.632 18:27:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:54.632 18:27:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:28:54.632 18:27:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:28:54.632 18:27:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:54.632 18:27:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:54.632 [2024-12-06 18:27:25.542105] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:28:54.891 18:27:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:54.891 18:27:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:28:54.891 18:27:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:28:54.891 18:27:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:28:54.891 18:27:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:28:54.891 18:27:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- 
# local strip_size=0 00:28:54.891 18:27:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:28:54.891 18:27:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:28:54.891 18:27:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:28:54.891 18:27:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:28:54.891 18:27:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:28:54.891 18:27:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:54.891 18:27:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:28:54.891 18:27:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:54.891 18:27:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:54.891 18:27:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:54.891 18:27:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:28:54.891 "name": "Existed_Raid", 00:28:54.891 "uuid": "981c80e1-2e78-429f-98e9-b39799e67bd9", 00:28:54.891 "strip_size_kb": 0, 00:28:54.891 "state": "configuring", 00:28:54.891 "raid_level": "raid1", 00:28:54.891 "superblock": true, 00:28:54.891 "num_base_bdevs": 3, 00:28:54.891 "num_base_bdevs_discovered": 1, 00:28:54.891 "num_base_bdevs_operational": 3, 00:28:54.891 "base_bdevs_list": [ 00:28:54.891 { 00:28:54.891 "name": null, 00:28:54.891 "uuid": "4dc00ec4-8bde-4675-8b50-e990f4deaacb", 00:28:54.891 "is_configured": false, 00:28:54.891 "data_offset": 0, 00:28:54.891 "data_size": 63488 00:28:54.891 }, 00:28:54.891 { 00:28:54.891 "name": null, 00:28:54.891 "uuid": 
"30e497ed-3fff-4611-91dc-a2ca75955ab8", 00:28:54.891 "is_configured": false, 00:28:54.891 "data_offset": 0, 00:28:54.891 "data_size": 63488 00:28:54.891 }, 00:28:54.891 { 00:28:54.891 "name": "BaseBdev3", 00:28:54.891 "uuid": "91712f31-2070-4c2c-85a2-e4b578bc78b6", 00:28:54.891 "is_configured": true, 00:28:54.891 "data_offset": 2048, 00:28:54.891 "data_size": 63488 00:28:54.891 } 00:28:54.891 ] 00:28:54.891 }' 00:28:54.891 18:27:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:28:54.891 18:27:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:55.149 18:27:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:55.149 18:27:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:55.149 18:27:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:55.149 18:27:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:28:55.408 18:27:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:55.408 18:27:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:28:55.408 18:27:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:28:55.408 18:27:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:55.408 18:27:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:55.408 [2024-12-06 18:27:26.144982] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:28:55.408 18:27:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:55.408 18:27:26 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:28:55.408 18:27:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:28:55.408 18:27:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:28:55.408 18:27:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:28:55.408 18:27:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:28:55.408 18:27:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:28:55.408 18:27:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:28:55.408 18:27:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:28:55.408 18:27:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:28:55.408 18:27:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:28:55.408 18:27:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:28:55.408 18:27:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:55.408 18:27:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:55.408 18:27:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:55.408 18:27:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:55.408 18:27:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:28:55.408 "name": "Existed_Raid", 00:28:55.408 "uuid": "981c80e1-2e78-429f-98e9-b39799e67bd9", 00:28:55.408 "strip_size_kb": 0, 00:28:55.408 "state": "configuring", 00:28:55.408 
"raid_level": "raid1", 00:28:55.408 "superblock": true, 00:28:55.408 "num_base_bdevs": 3, 00:28:55.408 "num_base_bdevs_discovered": 2, 00:28:55.408 "num_base_bdevs_operational": 3, 00:28:55.408 "base_bdevs_list": [ 00:28:55.408 { 00:28:55.408 "name": null, 00:28:55.408 "uuid": "4dc00ec4-8bde-4675-8b50-e990f4deaacb", 00:28:55.408 "is_configured": false, 00:28:55.408 "data_offset": 0, 00:28:55.408 "data_size": 63488 00:28:55.408 }, 00:28:55.408 { 00:28:55.408 "name": "BaseBdev2", 00:28:55.408 "uuid": "30e497ed-3fff-4611-91dc-a2ca75955ab8", 00:28:55.409 "is_configured": true, 00:28:55.409 "data_offset": 2048, 00:28:55.409 "data_size": 63488 00:28:55.409 }, 00:28:55.409 { 00:28:55.409 "name": "BaseBdev3", 00:28:55.409 "uuid": "91712f31-2070-4c2c-85a2-e4b578bc78b6", 00:28:55.409 "is_configured": true, 00:28:55.409 "data_offset": 2048, 00:28:55.409 "data_size": 63488 00:28:55.409 } 00:28:55.409 ] 00:28:55.409 }' 00:28:55.409 18:27:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:28:55.409 18:27:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:55.667 18:27:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:28:55.667 18:27:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:55.667 18:27:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:55.667 18:27:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:55.927 18:27:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:55.927 18:27:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:28:55.927 18:27:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:28:55.927 18:27:26 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:55.927 18:27:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:55.927 18:27:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:55.927 18:27:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:55.927 18:27:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 4dc00ec4-8bde-4675-8b50-e990f4deaacb 00:28:55.927 18:27:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:55.927 18:27:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:55.927 [2024-12-06 18:27:26.743469] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:28:55.927 [2024-12-06 18:27:26.743723] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:28:55.927 [2024-12-06 18:27:26.743738] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:28:55.927 [2024-12-06 18:27:26.744010] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:28:55.927 NewBaseBdev 00:28:55.927 [2024-12-06 18:27:26.744146] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:28:55.927 [2024-12-06 18:27:26.744159] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:28:55.927 [2024-12-06 18:27:26.744339] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:55.927 18:27:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:55.927 18:27:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:28:55.927 
18:27:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:28:55.927 18:27:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:28:55.927 18:27:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:28:55.927 18:27:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:28:55.927 18:27:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:28:55.927 18:27:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:28:55.927 18:27:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:55.927 18:27:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:55.927 18:27:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:55.927 18:27:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:28:55.927 18:27:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:55.927 18:27:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:55.927 [ 00:28:55.927 { 00:28:55.927 "name": "NewBaseBdev", 00:28:55.927 "aliases": [ 00:28:55.927 "4dc00ec4-8bde-4675-8b50-e990f4deaacb" 00:28:55.927 ], 00:28:55.927 "product_name": "Malloc disk", 00:28:55.927 "block_size": 512, 00:28:55.927 "num_blocks": 65536, 00:28:55.927 "uuid": "4dc00ec4-8bde-4675-8b50-e990f4deaacb", 00:28:55.927 "assigned_rate_limits": { 00:28:55.927 "rw_ios_per_sec": 0, 00:28:55.927 "rw_mbytes_per_sec": 0, 00:28:55.927 "r_mbytes_per_sec": 0, 00:28:55.927 "w_mbytes_per_sec": 0 00:28:55.927 }, 00:28:55.927 "claimed": true, 00:28:55.927 "claim_type": "exclusive_write", 00:28:55.927 
"zoned": false, 00:28:55.927 "supported_io_types": { 00:28:55.927 "read": true, 00:28:55.927 "write": true, 00:28:55.927 "unmap": true, 00:28:55.927 "flush": true, 00:28:55.927 "reset": true, 00:28:55.927 "nvme_admin": false, 00:28:55.927 "nvme_io": false, 00:28:55.927 "nvme_io_md": false, 00:28:55.927 "write_zeroes": true, 00:28:55.927 "zcopy": true, 00:28:55.927 "get_zone_info": false, 00:28:55.927 "zone_management": false, 00:28:55.927 "zone_append": false, 00:28:55.927 "compare": false, 00:28:55.927 "compare_and_write": false, 00:28:55.927 "abort": true, 00:28:55.927 "seek_hole": false, 00:28:55.927 "seek_data": false, 00:28:55.927 "copy": true, 00:28:55.927 "nvme_iov_md": false 00:28:55.927 }, 00:28:55.927 "memory_domains": [ 00:28:55.927 { 00:28:55.927 "dma_device_id": "system", 00:28:55.927 "dma_device_type": 1 00:28:55.927 }, 00:28:55.927 { 00:28:55.927 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:55.927 "dma_device_type": 2 00:28:55.927 } 00:28:55.927 ], 00:28:55.928 "driver_specific": {} 00:28:55.928 } 00:28:55.928 ] 00:28:55.928 18:27:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:55.928 18:27:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:28:55.928 18:27:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:28:55.928 18:27:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:28:55.928 18:27:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:28:55.928 18:27:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:28:55.928 18:27:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:28:55.928 18:27:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 
00:28:55.928 18:27:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:28:55.928 18:27:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:28:55.928 18:27:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:28:55.928 18:27:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:28:55.928 18:27:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:55.928 18:27:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:28:55.928 18:27:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:55.928 18:27:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:55.928 18:27:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:55.928 18:27:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:28:55.928 "name": "Existed_Raid", 00:28:55.928 "uuid": "981c80e1-2e78-429f-98e9-b39799e67bd9", 00:28:55.928 "strip_size_kb": 0, 00:28:55.928 "state": "online", 00:28:55.928 "raid_level": "raid1", 00:28:55.928 "superblock": true, 00:28:55.928 "num_base_bdevs": 3, 00:28:55.928 "num_base_bdevs_discovered": 3, 00:28:55.928 "num_base_bdevs_operational": 3, 00:28:55.928 "base_bdevs_list": [ 00:28:55.928 { 00:28:55.928 "name": "NewBaseBdev", 00:28:55.928 "uuid": "4dc00ec4-8bde-4675-8b50-e990f4deaacb", 00:28:55.928 "is_configured": true, 00:28:55.928 "data_offset": 2048, 00:28:55.928 "data_size": 63488 00:28:55.928 }, 00:28:55.928 { 00:28:55.928 "name": "BaseBdev2", 00:28:55.928 "uuid": "30e497ed-3fff-4611-91dc-a2ca75955ab8", 00:28:55.928 "is_configured": true, 00:28:55.928 "data_offset": 2048, 00:28:55.928 "data_size": 63488 00:28:55.928 }, 00:28:55.928 
{ 00:28:55.928 "name": "BaseBdev3", 00:28:55.928 "uuid": "91712f31-2070-4c2c-85a2-e4b578bc78b6", 00:28:55.928 "is_configured": true, 00:28:55.928 "data_offset": 2048, 00:28:55.928 "data_size": 63488 00:28:55.928 } 00:28:55.928 ] 00:28:55.928 }' 00:28:55.928 18:27:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:28:55.928 18:27:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:56.512 18:27:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:28:56.512 18:27:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:28:56.512 18:27:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:28:56.512 18:27:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:28:56.512 18:27:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:28:56.512 18:27:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:28:56.512 18:27:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:28:56.512 18:27:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:28:56.512 18:27:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:56.512 18:27:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:56.512 [2024-12-06 18:27:27.267454] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:28:56.512 18:27:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:56.512 18:27:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:28:56.512 "name": "Existed_Raid", 00:28:56.512 
"aliases": [ 00:28:56.512 "981c80e1-2e78-429f-98e9-b39799e67bd9" 00:28:56.512 ], 00:28:56.512 "product_name": "Raid Volume", 00:28:56.512 "block_size": 512, 00:28:56.512 "num_blocks": 63488, 00:28:56.512 "uuid": "981c80e1-2e78-429f-98e9-b39799e67bd9", 00:28:56.512 "assigned_rate_limits": { 00:28:56.512 "rw_ios_per_sec": 0, 00:28:56.512 "rw_mbytes_per_sec": 0, 00:28:56.512 "r_mbytes_per_sec": 0, 00:28:56.512 "w_mbytes_per_sec": 0 00:28:56.512 }, 00:28:56.512 "claimed": false, 00:28:56.512 "zoned": false, 00:28:56.512 "supported_io_types": { 00:28:56.512 "read": true, 00:28:56.512 "write": true, 00:28:56.512 "unmap": false, 00:28:56.512 "flush": false, 00:28:56.512 "reset": true, 00:28:56.512 "nvme_admin": false, 00:28:56.512 "nvme_io": false, 00:28:56.512 "nvme_io_md": false, 00:28:56.512 "write_zeroes": true, 00:28:56.512 "zcopy": false, 00:28:56.512 "get_zone_info": false, 00:28:56.512 "zone_management": false, 00:28:56.512 "zone_append": false, 00:28:56.512 "compare": false, 00:28:56.512 "compare_and_write": false, 00:28:56.512 "abort": false, 00:28:56.512 "seek_hole": false, 00:28:56.512 "seek_data": false, 00:28:56.512 "copy": false, 00:28:56.512 "nvme_iov_md": false 00:28:56.512 }, 00:28:56.512 "memory_domains": [ 00:28:56.512 { 00:28:56.512 "dma_device_id": "system", 00:28:56.512 "dma_device_type": 1 00:28:56.512 }, 00:28:56.512 { 00:28:56.512 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:56.512 "dma_device_type": 2 00:28:56.512 }, 00:28:56.512 { 00:28:56.512 "dma_device_id": "system", 00:28:56.512 "dma_device_type": 1 00:28:56.512 }, 00:28:56.512 { 00:28:56.512 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:56.512 "dma_device_type": 2 00:28:56.512 }, 00:28:56.512 { 00:28:56.512 "dma_device_id": "system", 00:28:56.512 "dma_device_type": 1 00:28:56.512 }, 00:28:56.512 { 00:28:56.512 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:28:56.512 "dma_device_type": 2 00:28:56.512 } 00:28:56.513 ], 00:28:56.513 "driver_specific": { 00:28:56.513 "raid": { 00:28:56.513 
"uuid": "981c80e1-2e78-429f-98e9-b39799e67bd9", 00:28:56.513 "strip_size_kb": 0, 00:28:56.513 "state": "online", 00:28:56.513 "raid_level": "raid1", 00:28:56.513 "superblock": true, 00:28:56.513 "num_base_bdevs": 3, 00:28:56.513 "num_base_bdevs_discovered": 3, 00:28:56.513 "num_base_bdevs_operational": 3, 00:28:56.513 "base_bdevs_list": [ 00:28:56.513 { 00:28:56.513 "name": "NewBaseBdev", 00:28:56.513 "uuid": "4dc00ec4-8bde-4675-8b50-e990f4deaacb", 00:28:56.513 "is_configured": true, 00:28:56.513 "data_offset": 2048, 00:28:56.513 "data_size": 63488 00:28:56.513 }, 00:28:56.513 { 00:28:56.513 "name": "BaseBdev2", 00:28:56.513 "uuid": "30e497ed-3fff-4611-91dc-a2ca75955ab8", 00:28:56.513 "is_configured": true, 00:28:56.513 "data_offset": 2048, 00:28:56.513 "data_size": 63488 00:28:56.513 }, 00:28:56.513 { 00:28:56.513 "name": "BaseBdev3", 00:28:56.513 "uuid": "91712f31-2070-4c2c-85a2-e4b578bc78b6", 00:28:56.513 "is_configured": true, 00:28:56.513 "data_offset": 2048, 00:28:56.513 "data_size": 63488 00:28:56.513 } 00:28:56.513 ] 00:28:56.513 } 00:28:56.513 } 00:28:56.513 }' 00:28:56.513 18:27:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:28:56.513 18:27:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:28:56.513 BaseBdev2 00:28:56.513 BaseBdev3' 00:28:56.513 18:27:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:28:56.513 18:27:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:28:56.513 18:27:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:28:56.513 18:27:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:28:56.513 18:27:27 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:56.513 18:27:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:56.513 18:27:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:28:56.513 18:27:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:56.513 18:27:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:28:56.513 18:27:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:28:56.513 18:27:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:28:56.513 18:27:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:28:56.513 18:27:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:56.513 18:27:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:56.513 18:27:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:28:56.775 18:27:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:56.775 18:27:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:28:56.775 18:27:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:28:56.775 18:27:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:28:56.775 18:27:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:28:56.775 18:27:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # 
jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:28:56.775 18:27:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:56.775 18:27:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:56.775 18:27:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:56.775 18:27:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:28:56.775 18:27:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:28:56.775 18:27:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:28:56.775 18:27:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:56.775 18:27:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:56.775 [2024-12-06 18:27:27.522824] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:28:56.775 [2024-12-06 18:27:27.522874] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:28:56.775 [2024-12-06 18:27:27.522947] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:28:56.775 [2024-12-06 18:27:27.523372] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:28:56.775 [2024-12-06 18:27:27.523440] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:28:56.775 18:27:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:56.775 18:27:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 67759 00:28:56.775 18:27:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 67759 ']' 
00:28:56.775 18:27:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 67759 00:28:56.775 18:27:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:28:56.775 18:27:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:56.775 18:27:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67759 00:28:56.775 killing process with pid 67759 00:28:56.775 18:27:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:28:56.775 18:27:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:28:56.775 18:27:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67759' 00:28:56.775 18:27:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 67759 00:28:56.775 [2024-12-06 18:27:27.570472] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:28:56.775 18:27:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 67759 00:28:57.034 [2024-12-06 18:27:27.875947] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:28:58.410 18:27:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:28:58.410 00:28:58.410 real 0m10.492s 00:28:58.410 user 0m16.528s 00:28:58.410 sys 0m2.147s 00:28:58.410 ************************************ 00:28:58.410 END TEST raid_state_function_test_sb 00:28:58.410 ************************************ 00:28:58.410 18:27:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:58.410 18:27:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:28:58.410 18:27:29 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid1 3 
00:28:58.410 18:27:29 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:28:58.410 18:27:29 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:58.410 18:27:29 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:28:58.410 ************************************ 00:28:58.410 START TEST raid_superblock_test 00:28:58.410 ************************************ 00:28:58.410 18:27:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 3 00:28:58.410 18:27:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:28:58.410 18:27:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:28:58.410 18:27:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:28:58.410 18:27:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:28:58.410 18:27:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:28:58.410 18:27:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:28:58.410 18:27:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:28:58.410 18:27:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:28:58.410 18:27:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:28:58.410 18:27:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:28:58.410 18:27:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:28:58.410 18:27:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:28:58.410 18:27:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:28:58.410 18:27:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 
00:28:58.411 18:27:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:28:58.411 18:27:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=68374 00:28:58.411 18:27:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 68374 00:28:58.411 18:27:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 68374 ']' 00:28:58.411 18:27:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:58.411 18:27:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:58.411 18:27:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:58.411 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:58.411 18:27:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:58.411 18:27:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:28:58.411 18:27:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:28:58.411 [2024-12-06 18:27:29.207964] Starting SPDK v25.01-pre git sha1 b6a18b192 / DPDK 24.03.0 initialization... 
00:28:58.411 [2024-12-06 18:27:29.208090] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68374 ] 00:28:58.669 [2024-12-06 18:27:29.387331] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:58.669 [2024-12-06 18:27:29.500939] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:58.928 [2024-12-06 18:27:29.708073] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:28:58.928 [2024-12-06 18:27:29.708121] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:28:59.187 18:27:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:59.187 18:27:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:28:59.187 18:27:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:28:59.187 18:27:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:28:59.187 18:27:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:28:59.187 18:27:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:28:59.187 18:27:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:28:59.187 18:27:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:28:59.187 18:27:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:28:59.187 18:27:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:28:59.187 18:27:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:28:59.187 
18:27:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:59.187 18:27:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:28:59.187 malloc1 00:28:59.187 18:27:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:59.187 18:27:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:28:59.187 18:27:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:59.187 18:27:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:28:59.187 [2024-12-06 18:27:30.081190] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:28:59.187 [2024-12-06 18:27:30.081378] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:59.187 [2024-12-06 18:27:30.081438] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:28:59.187 [2024-12-06 18:27:30.081553] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:59.187 [2024-12-06 18:27:30.084038] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:59.187 [2024-12-06 18:27:30.084195] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:28:59.187 pt1 00:28:59.187 18:27:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:59.187 18:27:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:28:59.187 18:27:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:28:59.187 18:27:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:28:59.187 18:27:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:28:59.187 18:27:30 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:28:59.187 18:27:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:28:59.187 18:27:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:28:59.187 18:27:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:28:59.187 18:27:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:28:59.187 18:27:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:59.187 18:27:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:28:59.187 malloc2 00:28:59.187 18:27:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:59.187 18:27:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:28:59.187 18:27:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:59.187 18:27:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:28:59.448 [2024-12-06 18:27:30.136654] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:28:59.448 [2024-12-06 18:27:30.136828] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:59.448 [2024-12-06 18:27:30.136894] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:28:59.448 [2024-12-06 18:27:30.136983] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:59.448 [2024-12-06 18:27:30.139366] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:59.448 [2024-12-06 18:27:30.139502] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:28:59.448 
pt2 00:28:59.448 18:27:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:59.448 18:27:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:28:59.448 18:27:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:28:59.448 18:27:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:28:59.448 18:27:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:28:59.448 18:27:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:28:59.448 18:27:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:28:59.448 18:27:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:28:59.448 18:27:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:28:59.448 18:27:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:28:59.448 18:27:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:59.448 18:27:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:28:59.448 malloc3 00:28:59.448 18:27:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:59.448 18:27:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:28:59.448 18:27:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:59.448 18:27:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:28:59.448 [2024-12-06 18:27:30.203909] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:28:59.448 [2024-12-06 18:27:30.204068] 
vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:28:59.448 [2024-12-06 18:27:30.204126] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:28:59.448 [2024-12-06 18:27:30.204213] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:28:59.448 [2024-12-06 18:27:30.206636] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:28:59.448 [2024-12-06 18:27:30.206793] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:28:59.448 pt3 00:28:59.448 18:27:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:59.448 18:27:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:28:59.448 18:27:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:28:59.448 18:27:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:28:59.448 18:27:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:59.448 18:27:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:28:59.448 [2024-12-06 18:27:30.215940] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:28:59.448 [2024-12-06 18:27:30.218040] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:28:59.448 [2024-12-06 18:27:30.218275] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:28:59.448 [2024-12-06 18:27:30.218439] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:28:59.448 [2024-12-06 18:27:30.218462] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:28:59.448 [2024-12-06 18:27:30.218700] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:28:59.448 
[2024-12-06 18:27:30.218864] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:28:59.448 [2024-12-06 18:27:30.218878] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:28:59.448 [2024-12-06 18:27:30.219013] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:28:59.448 18:27:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:59.448 18:27:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:28:59.448 18:27:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:28:59.448 18:27:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:28:59.448 18:27:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:28:59.448 18:27:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:28:59.448 18:27:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:28:59.448 18:27:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:28:59.448 18:27:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:28:59.448 18:27:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:28:59.449 18:27:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:28:59.449 18:27:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:28:59.449 18:27:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:59.449 18:27:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:28:59.449 18:27:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r 
'.[] | select(.name == "raid_bdev1")' 00:28:59.449 18:27:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:59.449 18:27:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:28:59.449 "name": "raid_bdev1", 00:28:59.449 "uuid": "4b875e91-de23-418c-8155-a30f88bdff50", 00:28:59.449 "strip_size_kb": 0, 00:28:59.449 "state": "online", 00:28:59.449 "raid_level": "raid1", 00:28:59.449 "superblock": true, 00:28:59.449 "num_base_bdevs": 3, 00:28:59.449 "num_base_bdevs_discovered": 3, 00:28:59.449 "num_base_bdevs_operational": 3, 00:28:59.449 "base_bdevs_list": [ 00:28:59.449 { 00:28:59.449 "name": "pt1", 00:28:59.449 "uuid": "00000000-0000-0000-0000-000000000001", 00:28:59.449 "is_configured": true, 00:28:59.449 "data_offset": 2048, 00:28:59.449 "data_size": 63488 00:28:59.449 }, 00:28:59.449 { 00:28:59.449 "name": "pt2", 00:28:59.449 "uuid": "00000000-0000-0000-0000-000000000002", 00:28:59.449 "is_configured": true, 00:28:59.449 "data_offset": 2048, 00:28:59.449 "data_size": 63488 00:28:59.449 }, 00:28:59.449 { 00:28:59.449 "name": "pt3", 00:28:59.449 "uuid": "00000000-0000-0000-0000-000000000003", 00:28:59.449 "is_configured": true, 00:28:59.449 "data_offset": 2048, 00:28:59.449 "data_size": 63488 00:28:59.449 } 00:28:59.449 ] 00:28:59.449 }' 00:28:59.449 18:27:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:28:59.449 18:27:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:00.016 18:27:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:29:00.016 18:27:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:29:00.016 18:27:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:29:00.016 18:27:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:29:00.016 18:27:30 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:29:00.016 18:27:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:29:00.016 18:27:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:29:00.016 18:27:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:29:00.016 18:27:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:00.016 18:27:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:00.016 [2024-12-06 18:27:30.687600] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:29:00.016 18:27:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:00.016 18:27:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:29:00.016 "name": "raid_bdev1", 00:29:00.016 "aliases": [ 00:29:00.016 "4b875e91-de23-418c-8155-a30f88bdff50" 00:29:00.016 ], 00:29:00.016 "product_name": "Raid Volume", 00:29:00.016 "block_size": 512, 00:29:00.016 "num_blocks": 63488, 00:29:00.016 "uuid": "4b875e91-de23-418c-8155-a30f88bdff50", 00:29:00.016 "assigned_rate_limits": { 00:29:00.016 "rw_ios_per_sec": 0, 00:29:00.016 "rw_mbytes_per_sec": 0, 00:29:00.016 "r_mbytes_per_sec": 0, 00:29:00.016 "w_mbytes_per_sec": 0 00:29:00.016 }, 00:29:00.016 "claimed": false, 00:29:00.016 "zoned": false, 00:29:00.016 "supported_io_types": { 00:29:00.016 "read": true, 00:29:00.016 "write": true, 00:29:00.016 "unmap": false, 00:29:00.016 "flush": false, 00:29:00.016 "reset": true, 00:29:00.016 "nvme_admin": false, 00:29:00.016 "nvme_io": false, 00:29:00.016 "nvme_io_md": false, 00:29:00.016 "write_zeroes": true, 00:29:00.016 "zcopy": false, 00:29:00.016 "get_zone_info": false, 00:29:00.016 "zone_management": false, 00:29:00.016 "zone_append": false, 00:29:00.016 "compare": false, 00:29:00.016 
"compare_and_write": false, 00:29:00.016 "abort": false, 00:29:00.016 "seek_hole": false, 00:29:00.016 "seek_data": false, 00:29:00.016 "copy": false, 00:29:00.016 "nvme_iov_md": false 00:29:00.016 }, 00:29:00.016 "memory_domains": [ 00:29:00.016 { 00:29:00.016 "dma_device_id": "system", 00:29:00.016 "dma_device_type": 1 00:29:00.016 }, 00:29:00.016 { 00:29:00.016 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:29:00.017 "dma_device_type": 2 00:29:00.017 }, 00:29:00.017 { 00:29:00.017 "dma_device_id": "system", 00:29:00.017 "dma_device_type": 1 00:29:00.017 }, 00:29:00.017 { 00:29:00.017 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:29:00.017 "dma_device_type": 2 00:29:00.017 }, 00:29:00.017 { 00:29:00.017 "dma_device_id": "system", 00:29:00.017 "dma_device_type": 1 00:29:00.017 }, 00:29:00.017 { 00:29:00.017 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:29:00.017 "dma_device_type": 2 00:29:00.017 } 00:29:00.017 ], 00:29:00.017 "driver_specific": { 00:29:00.017 "raid": { 00:29:00.017 "uuid": "4b875e91-de23-418c-8155-a30f88bdff50", 00:29:00.017 "strip_size_kb": 0, 00:29:00.017 "state": "online", 00:29:00.017 "raid_level": "raid1", 00:29:00.017 "superblock": true, 00:29:00.017 "num_base_bdevs": 3, 00:29:00.017 "num_base_bdevs_discovered": 3, 00:29:00.017 "num_base_bdevs_operational": 3, 00:29:00.017 "base_bdevs_list": [ 00:29:00.017 { 00:29:00.017 "name": "pt1", 00:29:00.017 "uuid": "00000000-0000-0000-0000-000000000001", 00:29:00.017 "is_configured": true, 00:29:00.017 "data_offset": 2048, 00:29:00.017 "data_size": 63488 00:29:00.017 }, 00:29:00.017 { 00:29:00.017 "name": "pt2", 00:29:00.017 "uuid": "00000000-0000-0000-0000-000000000002", 00:29:00.017 "is_configured": true, 00:29:00.017 "data_offset": 2048, 00:29:00.017 "data_size": 63488 00:29:00.017 }, 00:29:00.017 { 00:29:00.017 "name": "pt3", 00:29:00.017 "uuid": "00000000-0000-0000-0000-000000000003", 00:29:00.017 "is_configured": true, 00:29:00.017 "data_offset": 2048, 00:29:00.017 "data_size": 63488 00:29:00.017 } 
00:29:00.017 ] 00:29:00.017 } 00:29:00.017 } 00:29:00.017 }' 00:29:00.017 18:27:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:29:00.017 18:27:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:29:00.017 pt2 00:29:00.017 pt3' 00:29:00.017 18:27:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:29:00.017 18:27:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:29:00.017 18:27:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:29:00.017 18:27:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:29:00.017 18:27:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:00.017 18:27:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:00.017 18:27:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:29:00.017 18:27:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:00.017 18:27:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:29:00.017 18:27:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:29:00.017 18:27:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:29:00.017 18:27:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:29:00.017 18:27:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:29:00.017 18:27:30 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:29:00.017 18:27:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:00.017 18:27:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:00.017 18:27:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:29:00.017 18:27:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:29:00.017 18:27:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:29:00.017 18:27:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:29:00.017 18:27:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:29:00.017 18:27:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:00.017 18:27:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:00.017 18:27:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:00.276 18:27:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:29:00.276 18:27:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:29:00.276 18:27:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:29:00.276 18:27:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:29:00.276 18:27:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:00.276 18:27:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:00.276 [2024-12-06 18:27:30.975143] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:29:00.276 18:27:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:29:00.276 18:27:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=4b875e91-de23-418c-8155-a30f88bdff50 00:29:00.276 18:27:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 4b875e91-de23-418c-8155-a30f88bdff50 ']' 00:29:00.276 18:27:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:29:00.276 18:27:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:00.276 18:27:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:00.276 [2024-12-06 18:27:31.018837] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:29:00.276 [2024-12-06 18:27:31.018869] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:29:00.276 [2024-12-06 18:27:31.018948] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:29:00.276 [2024-12-06 18:27:31.019021] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:29:00.276 [2024-12-06 18:27:31.019032] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:29:00.276 18:27:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:00.276 18:27:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:29:00.276 18:27:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:00.276 18:27:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:00.276 18:27:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:00.276 18:27:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:00.276 18:27:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 
00:29:00.276 18:27:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:29:00.276 18:27:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:29:00.276 18:27:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:29:00.277 18:27:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:00.277 18:27:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:00.277 18:27:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:00.277 18:27:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:29:00.277 18:27:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:29:00.277 18:27:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:00.277 18:27:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:00.277 18:27:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:00.277 18:27:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:29:00.277 18:27:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:29:00.277 18:27:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:00.277 18:27:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:00.277 18:27:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:00.277 18:27:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:29:00.277 18:27:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:00.277 18:27:31 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:29:00.277 18:27:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:29:00.277 18:27:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:00.277 18:27:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:29:00.277 18:27:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:29:00.277 18:27:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:29:00.277 18:27:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:29:00.277 18:27:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:29:00.277 18:27:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:00.277 18:27:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:29:00.277 18:27:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:00.277 18:27:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:29:00.277 18:27:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:00.277 18:27:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:00.277 [2024-12-06 18:27:31.158693] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:29:00.277 [2024-12-06 18:27:31.160829] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:29:00.277 [2024-12-06 18:27:31.160896] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:29:00.277 [2024-12-06 18:27:31.160948] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:29:00.277 [2024-12-06 18:27:31.161010] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:29:00.277 [2024-12-06 18:27:31.161044] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:29:00.277 [2024-12-06 18:27:31.161065] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:29:00.277 [2024-12-06 18:27:31.161076] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:29:00.277 request: 00:29:00.277 { 00:29:00.277 "name": "raid_bdev1", 00:29:00.277 "raid_level": "raid1", 00:29:00.277 "base_bdevs": [ 00:29:00.277 "malloc1", 00:29:00.277 "malloc2", 00:29:00.277 "malloc3" 00:29:00.277 ], 00:29:00.277 "superblock": false, 00:29:00.277 "method": "bdev_raid_create", 00:29:00.277 "req_id": 1 00:29:00.277 } 00:29:00.277 Got JSON-RPC error response 00:29:00.277 response: 00:29:00.277 { 00:29:00.277 "code": -17, 00:29:00.277 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:29:00.277 } 00:29:00.277 18:27:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:29:00.277 18:27:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:29:00.277 18:27:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:29:00.277 18:27:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:29:00.277 18:27:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:29:00.277 18:27:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # 
rpc_cmd bdev_raid_get_bdevs all 00:29:00.277 18:27:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:00.277 18:27:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:29:00.277 18:27:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:00.277 18:27:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:00.277 18:27:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:29:00.277 18:27:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:29:00.277 18:27:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:29:00.277 18:27:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:00.277 18:27:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:00.536 [2024-12-06 18:27:31.226550] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:29:00.536 [2024-12-06 18:27:31.226617] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:00.536 [2024-12-06 18:27:31.226640] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:29:00.536 [2024-12-06 18:27:31.226652] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:00.536 [2024-12-06 18:27:31.229099] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:00.536 [2024-12-06 18:27:31.229142] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:29:00.536 [2024-12-06 18:27:31.229246] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:29:00.536 [2024-12-06 18:27:31.229309] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:29:00.536 pt1 00:29:00.536 
18:27:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:00.536 18:27:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:29:00.536 18:27:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:29:00.536 18:27:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:29:00.536 18:27:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:29:00.536 18:27:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:29:00.536 18:27:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:29:00.536 18:27:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:29:00.536 18:27:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:29:00.536 18:27:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:29:00.536 18:27:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:29:00.536 18:27:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:00.537 18:27:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:00.537 18:27:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:00.537 18:27:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:00.537 18:27:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:00.537 18:27:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:29:00.537 "name": "raid_bdev1", 00:29:00.537 "uuid": "4b875e91-de23-418c-8155-a30f88bdff50", 00:29:00.537 "strip_size_kb": 0, 00:29:00.537 
"state": "configuring", 00:29:00.537 "raid_level": "raid1", 00:29:00.537 "superblock": true, 00:29:00.537 "num_base_bdevs": 3, 00:29:00.537 "num_base_bdevs_discovered": 1, 00:29:00.537 "num_base_bdevs_operational": 3, 00:29:00.537 "base_bdevs_list": [ 00:29:00.537 { 00:29:00.537 "name": "pt1", 00:29:00.537 "uuid": "00000000-0000-0000-0000-000000000001", 00:29:00.537 "is_configured": true, 00:29:00.537 "data_offset": 2048, 00:29:00.537 "data_size": 63488 00:29:00.537 }, 00:29:00.537 { 00:29:00.537 "name": null, 00:29:00.537 "uuid": "00000000-0000-0000-0000-000000000002", 00:29:00.537 "is_configured": false, 00:29:00.537 "data_offset": 2048, 00:29:00.537 "data_size": 63488 00:29:00.537 }, 00:29:00.537 { 00:29:00.537 "name": null, 00:29:00.537 "uuid": "00000000-0000-0000-0000-000000000003", 00:29:00.537 "is_configured": false, 00:29:00.537 "data_offset": 2048, 00:29:00.537 "data_size": 63488 00:29:00.537 } 00:29:00.537 ] 00:29:00.537 }' 00:29:00.537 18:27:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:29:00.537 18:27:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:00.796 18:27:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:29:00.796 18:27:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:29:00.796 18:27:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:00.796 18:27:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:00.796 [2024-12-06 18:27:31.653964] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:29:00.796 [2024-12-06 18:27:31.654035] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:00.796 [2024-12-06 18:27:31.654061] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:29:00.796 
[2024-12-06 18:27:31.654073] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:00.796 [2024-12-06 18:27:31.654541] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:00.796 [2024-12-06 18:27:31.654563] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:29:00.796 [2024-12-06 18:27:31.654650] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:29:00.796 [2024-12-06 18:27:31.654674] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:29:00.796 pt2 00:29:00.796 18:27:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:00.796 18:27:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:29:00.796 18:27:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:00.796 18:27:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:00.796 [2024-12-06 18:27:31.661933] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:29:00.796 18:27:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:00.796 18:27:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:29:00.796 18:27:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:29:00.796 18:27:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:29:00.796 18:27:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:29:00.796 18:27:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:29:00.796 18:27:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:29:00.796 18:27:31 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:29:00.796 18:27:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:29:00.796 18:27:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:29:00.796 18:27:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:29:00.796 18:27:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:00.796 18:27:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:00.796 18:27:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:00.796 18:27:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:00.796 18:27:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:00.796 18:27:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:29:00.796 "name": "raid_bdev1", 00:29:00.796 "uuid": "4b875e91-de23-418c-8155-a30f88bdff50", 00:29:00.796 "strip_size_kb": 0, 00:29:00.796 "state": "configuring", 00:29:00.796 "raid_level": "raid1", 00:29:00.796 "superblock": true, 00:29:00.796 "num_base_bdevs": 3, 00:29:00.796 "num_base_bdevs_discovered": 1, 00:29:00.796 "num_base_bdevs_operational": 3, 00:29:00.796 "base_bdevs_list": [ 00:29:00.796 { 00:29:00.796 "name": "pt1", 00:29:00.796 "uuid": "00000000-0000-0000-0000-000000000001", 00:29:00.796 "is_configured": true, 00:29:00.796 "data_offset": 2048, 00:29:00.796 "data_size": 63488 00:29:00.796 }, 00:29:00.796 { 00:29:00.796 "name": null, 00:29:00.796 "uuid": "00000000-0000-0000-0000-000000000002", 00:29:00.796 "is_configured": false, 00:29:00.796 "data_offset": 0, 00:29:00.796 "data_size": 63488 00:29:00.796 }, 00:29:00.796 { 00:29:00.796 "name": null, 00:29:00.796 "uuid": "00000000-0000-0000-0000-000000000003", 00:29:00.796 "is_configured": false, 00:29:00.796 
"data_offset": 2048, 00:29:00.796 "data_size": 63488 00:29:00.796 } 00:29:00.796 ] 00:29:00.796 }' 00:29:00.796 18:27:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:29:00.796 18:27:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:01.364 18:27:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:29:01.364 18:27:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:29:01.364 18:27:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:29:01.364 18:27:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:01.364 18:27:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:01.364 [2024-12-06 18:27:32.121806] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:29:01.364 [2024-12-06 18:27:32.121888] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:01.364 [2024-12-06 18:27:32.121911] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:29:01.364 [2024-12-06 18:27:32.121926] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:01.364 [2024-12-06 18:27:32.122444] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:01.364 [2024-12-06 18:27:32.122469] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:29:01.364 [2024-12-06 18:27:32.122549] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:29:01.364 [2024-12-06 18:27:32.122586] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:29:01.364 pt2 00:29:01.364 18:27:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:01.364 18:27:32 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:29:01.364 18:27:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:29:01.364 18:27:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:29:01.364 18:27:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:01.364 18:27:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:01.364 [2024-12-06 18:27:32.133801] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:29:01.364 [2024-12-06 18:27:32.133858] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:01.364 [2024-12-06 18:27:32.133875] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:29:01.364 [2024-12-06 18:27:32.133888] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:01.364 [2024-12-06 18:27:32.134321] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:01.364 [2024-12-06 18:27:32.134350] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:29:01.364 [2024-12-06 18:27:32.134416] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:29:01.364 [2024-12-06 18:27:32.134439] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:29:01.364 [2024-12-06 18:27:32.134579] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:29:01.364 [2024-12-06 18:27:32.134596] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:29:01.364 [2024-12-06 18:27:32.134861] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:29:01.364 [2024-12-06 18:27:32.135021] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid 
bdev generic 0x617000007e80 00:29:01.364 [2024-12-06 18:27:32.135031] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:29:01.364 [2024-12-06 18:27:32.135202] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:29:01.364 pt3 00:29:01.364 18:27:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:01.364 18:27:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:29:01.364 18:27:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:29:01.364 18:27:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:29:01.364 18:27:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:29:01.365 18:27:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:29:01.365 18:27:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:29:01.365 18:27:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:29:01.365 18:27:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:29:01.365 18:27:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:29:01.365 18:27:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:29:01.365 18:27:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:29:01.365 18:27:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:29:01.365 18:27:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:01.365 18:27:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:01.365 18:27:32 bdev_raid.raid_superblock_test 
-- common/autotest_common.sh@10 -- # set +x 00:29:01.365 18:27:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:01.365 18:27:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:01.365 18:27:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:29:01.365 "name": "raid_bdev1", 00:29:01.365 "uuid": "4b875e91-de23-418c-8155-a30f88bdff50", 00:29:01.365 "strip_size_kb": 0, 00:29:01.365 "state": "online", 00:29:01.365 "raid_level": "raid1", 00:29:01.365 "superblock": true, 00:29:01.365 "num_base_bdevs": 3, 00:29:01.365 "num_base_bdevs_discovered": 3, 00:29:01.365 "num_base_bdevs_operational": 3, 00:29:01.365 "base_bdevs_list": [ 00:29:01.365 { 00:29:01.365 "name": "pt1", 00:29:01.365 "uuid": "00000000-0000-0000-0000-000000000001", 00:29:01.365 "is_configured": true, 00:29:01.365 "data_offset": 2048, 00:29:01.365 "data_size": 63488 00:29:01.365 }, 00:29:01.365 { 00:29:01.365 "name": "pt2", 00:29:01.365 "uuid": "00000000-0000-0000-0000-000000000002", 00:29:01.365 "is_configured": true, 00:29:01.365 "data_offset": 2048, 00:29:01.365 "data_size": 63488 00:29:01.365 }, 00:29:01.365 { 00:29:01.365 "name": "pt3", 00:29:01.365 "uuid": "00000000-0000-0000-0000-000000000003", 00:29:01.365 "is_configured": true, 00:29:01.365 "data_offset": 2048, 00:29:01.365 "data_size": 63488 00:29:01.365 } 00:29:01.365 ] 00:29:01.365 }' 00:29:01.365 18:27:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:29:01.365 18:27:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:01.933 18:27:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:29:01.933 18:27:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:29:01.933 18:27:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 
00:29:01.933 18:27:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:29:01.933 18:27:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:29:01.933 18:27:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:29:01.933 18:27:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:29:01.933 18:27:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:29:01.933 18:27:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:01.933 18:27:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:01.933 [2024-12-06 18:27:32.598137] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:29:01.933 18:27:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:01.933 18:27:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:29:01.933 "name": "raid_bdev1", 00:29:01.933 "aliases": [ 00:29:01.933 "4b875e91-de23-418c-8155-a30f88bdff50" 00:29:01.933 ], 00:29:01.933 "product_name": "Raid Volume", 00:29:01.933 "block_size": 512, 00:29:01.933 "num_blocks": 63488, 00:29:01.933 "uuid": "4b875e91-de23-418c-8155-a30f88bdff50", 00:29:01.933 "assigned_rate_limits": { 00:29:01.933 "rw_ios_per_sec": 0, 00:29:01.933 "rw_mbytes_per_sec": 0, 00:29:01.933 "r_mbytes_per_sec": 0, 00:29:01.933 "w_mbytes_per_sec": 0 00:29:01.933 }, 00:29:01.933 "claimed": false, 00:29:01.933 "zoned": false, 00:29:01.933 "supported_io_types": { 00:29:01.933 "read": true, 00:29:01.933 "write": true, 00:29:01.933 "unmap": false, 00:29:01.933 "flush": false, 00:29:01.933 "reset": true, 00:29:01.933 "nvme_admin": false, 00:29:01.933 "nvme_io": false, 00:29:01.933 "nvme_io_md": false, 00:29:01.933 "write_zeroes": true, 00:29:01.933 "zcopy": false, 00:29:01.933 "get_zone_info": false, 
00:29:01.933 "zone_management": false, 00:29:01.933 "zone_append": false, 00:29:01.934 "compare": false, 00:29:01.934 "compare_and_write": false, 00:29:01.934 "abort": false, 00:29:01.934 "seek_hole": false, 00:29:01.934 "seek_data": false, 00:29:01.934 "copy": false, 00:29:01.934 "nvme_iov_md": false 00:29:01.934 }, 00:29:01.934 "memory_domains": [ 00:29:01.934 { 00:29:01.934 "dma_device_id": "system", 00:29:01.934 "dma_device_type": 1 00:29:01.934 }, 00:29:01.934 { 00:29:01.934 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:29:01.934 "dma_device_type": 2 00:29:01.934 }, 00:29:01.934 { 00:29:01.934 "dma_device_id": "system", 00:29:01.934 "dma_device_type": 1 00:29:01.934 }, 00:29:01.934 { 00:29:01.934 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:29:01.934 "dma_device_type": 2 00:29:01.934 }, 00:29:01.934 { 00:29:01.934 "dma_device_id": "system", 00:29:01.934 "dma_device_type": 1 00:29:01.934 }, 00:29:01.934 { 00:29:01.934 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:29:01.934 "dma_device_type": 2 00:29:01.934 } 00:29:01.934 ], 00:29:01.934 "driver_specific": { 00:29:01.934 "raid": { 00:29:01.934 "uuid": "4b875e91-de23-418c-8155-a30f88bdff50", 00:29:01.934 "strip_size_kb": 0, 00:29:01.934 "state": "online", 00:29:01.934 "raid_level": "raid1", 00:29:01.934 "superblock": true, 00:29:01.934 "num_base_bdevs": 3, 00:29:01.934 "num_base_bdevs_discovered": 3, 00:29:01.934 "num_base_bdevs_operational": 3, 00:29:01.934 "base_bdevs_list": [ 00:29:01.934 { 00:29:01.934 "name": "pt1", 00:29:01.934 "uuid": "00000000-0000-0000-0000-000000000001", 00:29:01.934 "is_configured": true, 00:29:01.934 "data_offset": 2048, 00:29:01.934 "data_size": 63488 00:29:01.934 }, 00:29:01.934 { 00:29:01.934 "name": "pt2", 00:29:01.934 "uuid": "00000000-0000-0000-0000-000000000002", 00:29:01.934 "is_configured": true, 00:29:01.934 "data_offset": 2048, 00:29:01.934 "data_size": 63488 00:29:01.934 }, 00:29:01.934 { 00:29:01.934 "name": "pt3", 00:29:01.934 "uuid": 
"00000000-0000-0000-0000-000000000003", 00:29:01.934 "is_configured": true, 00:29:01.934 "data_offset": 2048, 00:29:01.934 "data_size": 63488 00:29:01.934 } 00:29:01.934 ] 00:29:01.934 } 00:29:01.934 } 00:29:01.934 }' 00:29:01.934 18:27:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:29:01.934 18:27:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:29:01.934 pt2 00:29:01.934 pt3' 00:29:01.934 18:27:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:29:01.934 18:27:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:29:01.934 18:27:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:29:01.934 18:27:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:29:01.934 18:27:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:01.934 18:27:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:01.934 18:27:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:29:01.934 18:27:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:01.934 18:27:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:29:01.934 18:27:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:29:01.934 18:27:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:29:01.934 18:27:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:29:01.934 18:27:32 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:29:01.934 18:27:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:01.934 18:27:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:01.934 18:27:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:01.934 18:27:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:29:01.934 18:27:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:29:01.934 18:27:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:29:01.934 18:27:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:29:01.934 18:27:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:29:01.934 18:27:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:01.934 18:27:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:01.934 18:27:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:01.934 18:27:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:29:01.934 18:27:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:29:01.934 18:27:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:29:01.934 18:27:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:29:01.934 18:27:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:01.934 18:27:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:01.934 [2024-12-06 18:27:32.878064] 
bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:29:02.194 18:27:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:02.194 18:27:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 4b875e91-de23-418c-8155-a30f88bdff50 '!=' 4b875e91-de23-418c-8155-a30f88bdff50 ']' 00:29:02.194 18:27:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:29:02.194 18:27:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:29:02.194 18:27:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:29:02.194 18:27:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:29:02.194 18:27:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:02.194 18:27:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:02.194 [2024-12-06 18:27:32.921852] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:29:02.194 18:27:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:02.194 18:27:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:29:02.194 18:27:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:29:02.194 18:27:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:29:02.194 18:27:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:29:02.194 18:27:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:29:02.194 18:27:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:29:02.194 18:27:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:29:02.194 18:27:32 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:29:02.194 18:27:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:29:02.194 18:27:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:29:02.194 18:27:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:02.194 18:27:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:02.194 18:27:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:02.194 18:27:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:02.194 18:27:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:02.194 18:27:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:29:02.194 "name": "raid_bdev1", 00:29:02.194 "uuid": "4b875e91-de23-418c-8155-a30f88bdff50", 00:29:02.194 "strip_size_kb": 0, 00:29:02.194 "state": "online", 00:29:02.194 "raid_level": "raid1", 00:29:02.194 "superblock": true, 00:29:02.194 "num_base_bdevs": 3, 00:29:02.194 "num_base_bdevs_discovered": 2, 00:29:02.194 "num_base_bdevs_operational": 2, 00:29:02.194 "base_bdevs_list": [ 00:29:02.194 { 00:29:02.194 "name": null, 00:29:02.194 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:02.194 "is_configured": false, 00:29:02.194 "data_offset": 0, 00:29:02.194 "data_size": 63488 00:29:02.194 }, 00:29:02.194 { 00:29:02.194 "name": "pt2", 00:29:02.194 "uuid": "00000000-0000-0000-0000-000000000002", 00:29:02.194 "is_configured": true, 00:29:02.194 "data_offset": 2048, 00:29:02.194 "data_size": 63488 00:29:02.194 }, 00:29:02.194 { 00:29:02.194 "name": "pt3", 00:29:02.194 "uuid": "00000000-0000-0000-0000-000000000003", 00:29:02.194 "is_configured": true, 00:29:02.194 "data_offset": 2048, 00:29:02.194 "data_size": 63488 00:29:02.194 } 
00:29:02.194 ] 00:29:02.194 }' 00:29:02.194 18:27:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:29:02.194 18:27:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:02.453 18:27:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:29:02.453 18:27:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:02.453 18:27:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:02.453 [2024-12-06 18:27:33.353407] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:29:02.453 [2024-12-06 18:27:33.353443] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:29:02.453 [2024-12-06 18:27:33.353525] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:29:02.453 [2024-12-06 18:27:33.353585] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:29:02.453 [2024-12-06 18:27:33.353603] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:29:02.453 18:27:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:02.453 18:27:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:02.453 18:27:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:29:02.453 18:27:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:02.453 18:27:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:02.453 18:27:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:02.711 18:27:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:29:02.711 18:27:33 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:29:02.711 18:27:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:29:02.711 18:27:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:29:02.711 18:27:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:29:02.711 18:27:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:02.711 18:27:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:02.711 18:27:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:02.711 18:27:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:29:02.711 18:27:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:29:02.711 18:27:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:29:02.711 18:27:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:02.711 18:27:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:02.711 18:27:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:02.711 18:27:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:29:02.711 18:27:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:29:02.711 18:27:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:29:02.711 18:27:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:29:02.711 18:27:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:29:02.711 18:27:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:02.711 18:27:33 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:02.711 [2024-12-06 18:27:33.437256] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:29:02.711 [2024-12-06 18:27:33.437316] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:02.711 [2024-12-06 18:27:33.437334] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:29:02.711 [2024-12-06 18:27:33.437348] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:02.711 [2024-12-06 18:27:33.439756] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:02.711 [2024-12-06 18:27:33.439803] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:29:02.711 [2024-12-06 18:27:33.439877] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:29:02.711 [2024-12-06 18:27:33.439925] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:29:02.711 pt2 00:29:02.711 18:27:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:02.711 18:27:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:29:02.711 18:27:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:29:02.711 18:27:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:29:02.711 18:27:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:29:02.711 18:27:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:29:02.711 18:27:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:29:02.711 18:27:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:29:02.711 18:27:33 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:29:02.711 18:27:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:29:02.711 18:27:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:29:02.711 18:27:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:02.711 18:27:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:02.711 18:27:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:02.711 18:27:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:02.711 18:27:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:02.711 18:27:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:29:02.711 "name": "raid_bdev1", 00:29:02.711 "uuid": "4b875e91-de23-418c-8155-a30f88bdff50", 00:29:02.711 "strip_size_kb": 0, 00:29:02.711 "state": "configuring", 00:29:02.712 "raid_level": "raid1", 00:29:02.712 "superblock": true, 00:29:02.712 "num_base_bdevs": 3, 00:29:02.712 "num_base_bdevs_discovered": 1, 00:29:02.712 "num_base_bdevs_operational": 2, 00:29:02.712 "base_bdevs_list": [ 00:29:02.712 { 00:29:02.712 "name": null, 00:29:02.712 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:02.712 "is_configured": false, 00:29:02.712 "data_offset": 2048, 00:29:02.712 "data_size": 63488 00:29:02.712 }, 00:29:02.712 { 00:29:02.712 "name": "pt2", 00:29:02.712 "uuid": "00000000-0000-0000-0000-000000000002", 00:29:02.712 "is_configured": true, 00:29:02.712 "data_offset": 2048, 00:29:02.712 "data_size": 63488 00:29:02.712 }, 00:29:02.712 { 00:29:02.712 "name": null, 00:29:02.712 "uuid": "00000000-0000-0000-0000-000000000003", 00:29:02.712 "is_configured": false, 00:29:02.712 "data_offset": 2048, 00:29:02.712 "data_size": 63488 00:29:02.712 } 
00:29:02.712 ] 00:29:02.712 }' 00:29:02.712 18:27:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:29:02.712 18:27:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:02.970 18:27:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:29:02.970 18:27:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:29:02.970 18:27:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=2 00:29:02.970 18:27:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:29:02.970 18:27:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:02.970 18:27:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:02.970 [2024-12-06 18:27:33.872742] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:29:02.970 [2024-12-06 18:27:33.872822] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:02.970 [2024-12-06 18:27:33.872846] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:29:02.970 [2024-12-06 18:27:33.872861] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:02.970 [2024-12-06 18:27:33.873373] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:02.970 [2024-12-06 18:27:33.873406] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:29:02.970 [2024-12-06 18:27:33.873515] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:29:02.970 [2024-12-06 18:27:33.873548] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:29:02.970 [2024-12-06 18:27:33.873658] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 
00:29:02.970 [2024-12-06 18:27:33.873684] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:29:02.970 [2024-12-06 18:27:33.873986] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:29:02.970 [2024-12-06 18:27:33.874139] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:29:02.970 [2024-12-06 18:27:33.874150] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:29:02.970 [2024-12-06 18:27:33.874325] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:29:02.970 pt3 00:29:02.970 18:27:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:02.970 18:27:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:29:02.970 18:27:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:29:02.970 18:27:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:29:02.970 18:27:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:29:02.970 18:27:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:29:02.970 18:27:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:29:02.970 18:27:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:29:02.970 18:27:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:29:02.970 18:27:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:29:02.970 18:27:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:29:02.970 18:27:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:02.970 
18:27:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:02.970 18:27:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:02.970 18:27:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:02.970 18:27:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:03.229 18:27:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:29:03.229 "name": "raid_bdev1", 00:29:03.229 "uuid": "4b875e91-de23-418c-8155-a30f88bdff50", 00:29:03.229 "strip_size_kb": 0, 00:29:03.229 "state": "online", 00:29:03.229 "raid_level": "raid1", 00:29:03.229 "superblock": true, 00:29:03.229 "num_base_bdevs": 3, 00:29:03.229 "num_base_bdevs_discovered": 2, 00:29:03.229 "num_base_bdevs_operational": 2, 00:29:03.229 "base_bdevs_list": [ 00:29:03.229 { 00:29:03.229 "name": null, 00:29:03.229 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:03.229 "is_configured": false, 00:29:03.229 "data_offset": 2048, 00:29:03.229 "data_size": 63488 00:29:03.229 }, 00:29:03.229 { 00:29:03.229 "name": "pt2", 00:29:03.229 "uuid": "00000000-0000-0000-0000-000000000002", 00:29:03.229 "is_configured": true, 00:29:03.229 "data_offset": 2048, 00:29:03.229 "data_size": 63488 00:29:03.229 }, 00:29:03.229 { 00:29:03.229 "name": "pt3", 00:29:03.229 "uuid": "00000000-0000-0000-0000-000000000003", 00:29:03.229 "is_configured": true, 00:29:03.229 "data_offset": 2048, 00:29:03.229 "data_size": 63488 00:29:03.229 } 00:29:03.229 ] 00:29:03.229 }' 00:29:03.229 18:27:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:29:03.229 18:27:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:03.515 18:27:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:29:03.515 18:27:34 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:29:03.515 18:27:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:03.515 [2024-12-06 18:27:34.328122] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:29:03.515 [2024-12-06 18:27:34.328171] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:29:03.515 [2024-12-06 18:27:34.328250] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:29:03.515 [2024-12-06 18:27:34.328313] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:29:03.515 [2024-12-06 18:27:34.328324] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:29:03.515 18:27:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:03.515 18:27:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:03.515 18:27:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:03.515 18:27:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:29:03.515 18:27:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:03.515 18:27:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:03.515 18:27:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:29:03.515 18:27:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:29:03.515 18:27:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 3 -gt 2 ']' 00:29:03.515 18:27:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@534 -- # i=2 00:29:03.515 18:27:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt3 00:29:03.515 18:27:34 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:29:03.515 18:27:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:03.515 18:27:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:03.515 18:27:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:29:03.515 18:27:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:03.515 18:27:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:03.515 [2024-12-06 18:27:34.396045] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:29:03.515 [2024-12-06 18:27:34.396127] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:03.515 [2024-12-06 18:27:34.396160] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:29:03.515 [2024-12-06 18:27:34.396183] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:03.515 [2024-12-06 18:27:34.398860] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:03.515 [2024-12-06 18:27:34.398902] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:29:03.515 [2024-12-06 18:27:34.398985] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:29:03.515 [2024-12-06 18:27:34.399039] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:29:03.515 [2024-12-06 18:27:34.399179] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:29:03.515 [2024-12-06 18:27:34.399192] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:29:03.515 [2024-12-06 18:27:34.399211] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000008580 name raid_bdev1, state configuring 00:29:03.515 [2024-12-06 18:27:34.399290] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:29:03.515 pt1 00:29:03.515 18:27:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:03.515 18:27:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 3 -gt 2 ']' 00:29:03.515 18:27:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:29:03.515 18:27:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:29:03.515 18:27:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:29:03.515 18:27:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:29:03.515 18:27:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:29:03.515 18:27:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:29:03.515 18:27:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:29:03.515 18:27:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:29:03.515 18:27:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:29:03.515 18:27:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:29:03.515 18:27:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:03.515 18:27:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:03.515 18:27:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:03.515 18:27:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:03.515 18:27:34 bdev_raid.raid_superblock_test 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:03.778 18:27:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:29:03.778 "name": "raid_bdev1", 00:29:03.778 "uuid": "4b875e91-de23-418c-8155-a30f88bdff50", 00:29:03.778 "strip_size_kb": 0, 00:29:03.778 "state": "configuring", 00:29:03.778 "raid_level": "raid1", 00:29:03.778 "superblock": true, 00:29:03.778 "num_base_bdevs": 3, 00:29:03.778 "num_base_bdevs_discovered": 1, 00:29:03.778 "num_base_bdevs_operational": 2, 00:29:03.778 "base_bdevs_list": [ 00:29:03.778 { 00:29:03.778 "name": null, 00:29:03.778 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:03.778 "is_configured": false, 00:29:03.778 "data_offset": 2048, 00:29:03.778 "data_size": 63488 00:29:03.778 }, 00:29:03.778 { 00:29:03.778 "name": "pt2", 00:29:03.778 "uuid": "00000000-0000-0000-0000-000000000002", 00:29:03.778 "is_configured": true, 00:29:03.778 "data_offset": 2048, 00:29:03.778 "data_size": 63488 00:29:03.778 }, 00:29:03.778 { 00:29:03.778 "name": null, 00:29:03.778 "uuid": "00000000-0000-0000-0000-000000000003", 00:29:03.778 "is_configured": false, 00:29:03.778 "data_offset": 2048, 00:29:03.778 "data_size": 63488 00:29:03.778 } 00:29:03.778 ] 00:29:03.778 }' 00:29:03.778 18:27:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:29:03.778 18:27:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:04.037 18:27:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:29:04.037 18:27:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:29:04.037 18:27:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:04.037 18:27:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:04.037 18:27:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:29:04.037 18:27:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:29:04.037 18:27:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:29:04.037 18:27:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:04.037 18:27:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:04.037 [2024-12-06 18:27:34.859438] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:29:04.037 [2024-12-06 18:27:34.859512] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:04.037 [2024-12-06 18:27:34.859539] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:29:04.037 [2024-12-06 18:27:34.859552] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:04.037 [2024-12-06 18:27:34.860078] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:04.037 [2024-12-06 18:27:34.860107] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:29:04.037 [2024-12-06 18:27:34.860204] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:29:04.037 [2024-12-06 18:27:34.860231] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:29:04.037 [2024-12-06 18:27:34.860357] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:29:04.037 [2024-12-06 18:27:34.860367] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:29:04.037 [2024-12-06 18:27:34.860635] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:29:04.038 [2024-12-06 18:27:34.860789] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:29:04.038 [2024-12-06 18:27:34.860805] 
bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:29:04.038 [2024-12-06 18:27:34.860944] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:29:04.038 pt3 00:29:04.038 18:27:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:04.038 18:27:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:29:04.038 18:27:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:29:04.038 18:27:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:29:04.038 18:27:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:29:04.038 18:27:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:29:04.038 18:27:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:29:04.038 18:27:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:29:04.038 18:27:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:29:04.038 18:27:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:29:04.038 18:27:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:29:04.038 18:27:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:04.038 18:27:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:04.038 18:27:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:04.038 18:27:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:04.038 18:27:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:29:04.038 18:27:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:29:04.038 "name": "raid_bdev1", 00:29:04.038 "uuid": "4b875e91-de23-418c-8155-a30f88bdff50", 00:29:04.038 "strip_size_kb": 0, 00:29:04.038 "state": "online", 00:29:04.038 "raid_level": "raid1", 00:29:04.038 "superblock": true, 00:29:04.038 "num_base_bdevs": 3, 00:29:04.038 "num_base_bdevs_discovered": 2, 00:29:04.038 "num_base_bdevs_operational": 2, 00:29:04.038 "base_bdevs_list": [ 00:29:04.038 { 00:29:04.038 "name": null, 00:29:04.038 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:04.038 "is_configured": false, 00:29:04.038 "data_offset": 2048, 00:29:04.038 "data_size": 63488 00:29:04.038 }, 00:29:04.038 { 00:29:04.038 "name": "pt2", 00:29:04.038 "uuid": "00000000-0000-0000-0000-000000000002", 00:29:04.038 "is_configured": true, 00:29:04.038 "data_offset": 2048, 00:29:04.038 "data_size": 63488 00:29:04.038 }, 00:29:04.038 { 00:29:04.038 "name": "pt3", 00:29:04.038 "uuid": "00000000-0000-0000-0000-000000000003", 00:29:04.038 "is_configured": true, 00:29:04.038 "data_offset": 2048, 00:29:04.038 "data_size": 63488 00:29:04.038 } 00:29:04.038 ] 00:29:04.038 }' 00:29:04.038 18:27:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:29:04.038 18:27:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:04.605 18:27:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:29:04.605 18:27:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:04.605 18:27:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:04.605 18:27:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:29:04.605 18:27:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:04.605 18:27:35 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:29:04.605 18:27:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:29:04.605 18:27:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:04.605 18:27:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:04.605 18:27:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:29:04.605 [2024-12-06 18:27:35.327021] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:29:04.605 18:27:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:04.605 18:27:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 4b875e91-de23-418c-8155-a30f88bdff50 '!=' 4b875e91-de23-418c-8155-a30f88bdff50 ']' 00:29:04.605 18:27:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 68374 00:29:04.605 18:27:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 68374 ']' 00:29:04.605 18:27:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 68374 00:29:04.605 18:27:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:29:04.605 18:27:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:04.605 18:27:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 68374 00:29:04.605 18:27:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:04.605 killing process with pid 68374 00:29:04.605 18:27:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:04.605 18:27:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 68374' 00:29:04.605 18:27:35 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@973 -- # kill 68374 00:29:04.605 [2024-12-06 18:27:35.410510] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:29:04.605 [2024-12-06 18:27:35.410620] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:29:04.605 [2024-12-06 18:27:35.410683] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:29:04.605 18:27:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 68374 00:29:04.605 [2024-12-06 18:27:35.410698] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:29:04.863 [2024-12-06 18:27:35.718110] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:29:06.236 18:27:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:29:06.236 00:29:06.236 real 0m7.749s 00:29:06.236 user 0m12.183s 00:29:06.236 sys 0m1.525s 00:29:06.236 18:27:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:06.236 ************************************ 00:29:06.236 END TEST raid_superblock_test 00:29:06.236 ************************************ 00:29:06.236 18:27:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:06.236 18:27:36 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid1 3 read 00:29:06.236 18:27:36 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:29:06.236 18:27:36 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:06.236 18:27:36 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:29:06.236 ************************************ 00:29:06.236 START TEST raid_read_error_test 00:29:06.236 ************************************ 00:29:06.236 18:27:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 3 read 00:29:06.236 18:27:36 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:29:06.236 18:27:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:29:06.236 18:27:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:29:06.236 18:27:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:29:06.236 18:27:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:29:06.236 18:27:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:29:06.236 18:27:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:29:06.236 18:27:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:29:06.236 18:27:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:29:06.236 18:27:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:29:06.236 18:27:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:29:06.236 18:27:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:29:06.236 18:27:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:29:06.236 18:27:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:29:06.236 18:27:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:29:06.236 18:27:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:29:06.236 18:27:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:29:06.236 18:27:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:29:06.236 18:27:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:29:06.236 18:27:36 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:29:06.236 18:27:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:29:06.236 18:27:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:29:06.236 18:27:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:29:06.236 18:27:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:29:06.236 18:27:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.WMCS74PR0s 00:29:06.236 18:27:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=68824 00:29:06.236 18:27:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:29:06.236 18:27:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 68824 00:29:06.236 18:27:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 68824 ']' 00:29:06.236 18:27:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:06.236 18:27:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:06.236 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:06.236 18:27:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:06.236 18:27:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:06.236 18:27:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:29:06.236 [2024-12-06 18:27:37.054092] Starting SPDK v25.01-pre git sha1 b6a18b192 / DPDK 24.03.0 initialization... 
00:29:06.236 [2024-12-06 18:27:37.054244] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68824 ] 00:29:06.494 [2024-12-06 18:27:37.226120] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:06.494 [2024-12-06 18:27:37.340090] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:06.752 [2024-12-06 18:27:37.551738] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:29:06.752 [2024-12-06 18:27:37.551794] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:29:07.009 18:27:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:07.009 18:27:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:29:07.009 18:27:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:29:07.009 18:27:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:29:07.009 18:27:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:07.009 18:27:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:29:07.009 BaseBdev1_malloc 00:29:07.009 18:27:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:07.009 18:27:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:29:07.009 18:27:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:07.009 18:27:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:29:07.009 true 00:29:07.009 18:27:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:29:07.009 18:27:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:29:07.009 18:27:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:07.009 18:27:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:29:07.009 [2024-12-06 18:27:37.951935] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:29:07.009 [2024-12-06 18:27:37.951988] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:07.009 [2024-12-06 18:27:37.952010] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:29:07.009 [2024-12-06 18:27:37.952025] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:07.009 [2024-12-06 18:27:37.954405] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:07.009 [2024-12-06 18:27:37.954446] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:29:07.009 BaseBdev1 00:29:07.009 18:27:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:07.268 18:27:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:29:07.268 18:27:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:29:07.268 18:27:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:07.268 18:27:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:29:07.268 BaseBdev2_malloc 00:29:07.268 18:27:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:07.268 18:27:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:29:07.268 18:27:38 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:29:07.268 18:27:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:29:07.268 true 00:29:07.268 18:27:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:07.268 18:27:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:29:07.268 18:27:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:07.268 18:27:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:29:07.268 [2024-12-06 18:27:38.022147] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:29:07.268 [2024-12-06 18:27:38.022212] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:07.268 [2024-12-06 18:27:38.022233] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:29:07.268 [2024-12-06 18:27:38.022247] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:07.268 [2024-12-06 18:27:38.024601] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:07.268 [2024-12-06 18:27:38.024640] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:29:07.268 BaseBdev2 00:29:07.268 18:27:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:07.268 18:27:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:29:07.268 18:27:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:29:07.268 18:27:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:07.268 18:27:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:29:07.268 BaseBdev3_malloc 00:29:07.268 18:27:38 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:07.268 18:27:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:29:07.268 18:27:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:07.268 18:27:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:29:07.268 true 00:29:07.268 18:27:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:07.268 18:27:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:29:07.268 18:27:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:07.268 18:27:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:29:07.268 [2024-12-06 18:27:38.103377] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:29:07.268 [2024-12-06 18:27:38.103427] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:07.268 [2024-12-06 18:27:38.103446] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:29:07.268 [2024-12-06 18:27:38.103460] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:07.268 [2024-12-06 18:27:38.106073] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:07.268 [2024-12-06 18:27:38.106114] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:29:07.268 BaseBdev3 00:29:07.268 18:27:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:07.268 18:27:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:29:07.268 18:27:38 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:29:07.268 18:27:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:29:07.268 [2024-12-06 18:27:38.115438] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:29:07.268 [2024-12-06 18:27:38.117470] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:29:07.268 [2024-12-06 18:27:38.117547] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:29:07.268 [2024-12-06 18:27:38.117750] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:29:07.268 [2024-12-06 18:27:38.117763] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:29:07.268 [2024-12-06 18:27:38.118020] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:29:07.268 [2024-12-06 18:27:38.118205] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:29:07.268 [2024-12-06 18:27:38.118219] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:29:07.269 [2024-12-06 18:27:38.118356] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:29:07.269 18:27:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:07.269 18:27:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:29:07.269 18:27:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:29:07.269 18:27:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:29:07.269 18:27:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:29:07.269 18:27:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:29:07.269 18:27:38 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:29:07.269 18:27:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:29:07.269 18:27:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:29:07.269 18:27:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:29:07.269 18:27:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:29:07.269 18:27:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:07.269 18:27:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:07.269 18:27:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:07.269 18:27:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:29:07.269 18:27:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:07.269 18:27:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:29:07.269 "name": "raid_bdev1", 00:29:07.269 "uuid": "6c2cdec3-e518-4de5-b45f-fbfc2b1b952a", 00:29:07.269 "strip_size_kb": 0, 00:29:07.269 "state": "online", 00:29:07.269 "raid_level": "raid1", 00:29:07.269 "superblock": true, 00:29:07.269 "num_base_bdevs": 3, 00:29:07.269 "num_base_bdevs_discovered": 3, 00:29:07.269 "num_base_bdevs_operational": 3, 00:29:07.269 "base_bdevs_list": [ 00:29:07.269 { 00:29:07.269 "name": "BaseBdev1", 00:29:07.269 "uuid": "a49989de-1db2-5852-a9cc-33a46d0735e5", 00:29:07.269 "is_configured": true, 00:29:07.269 "data_offset": 2048, 00:29:07.269 "data_size": 63488 00:29:07.269 }, 00:29:07.269 { 00:29:07.269 "name": "BaseBdev2", 00:29:07.269 "uuid": "5a537147-49e9-5e1c-8719-b017dd0f9d97", 00:29:07.269 "is_configured": true, 00:29:07.269 "data_offset": 2048, 00:29:07.269 "data_size": 63488 
00:29:07.269 }, 00:29:07.269 { 00:29:07.269 "name": "BaseBdev3", 00:29:07.269 "uuid": "78947ee2-1dcb-5c58-8a94-adf715815e62", 00:29:07.269 "is_configured": true, 00:29:07.269 "data_offset": 2048, 00:29:07.269 "data_size": 63488 00:29:07.269 } 00:29:07.269 ] 00:29:07.269 }' 00:29:07.269 18:27:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:29:07.269 18:27:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:29:07.834 18:27:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:29:07.834 18:27:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:29:07.834 [2024-12-06 18:27:38.648277] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:29:08.766 18:27:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:29:08.766 18:27:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:08.766 18:27:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:29:08.766 18:27:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:08.766 18:27:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:29:08.766 18:27:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:29:08.766 18:27:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ read = \w\r\i\t\e ]] 00:29:08.766 18:27:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:29:08.766 18:27:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:29:08.767 18:27:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:29:08.767 
18:27:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:29:08.767 18:27:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:29:08.767 18:27:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:29:08.767 18:27:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:29:08.767 18:27:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:29:08.767 18:27:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:29:08.767 18:27:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:29:08.767 18:27:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:29:08.767 18:27:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:08.767 18:27:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:08.767 18:27:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:08.767 18:27:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:29:08.767 18:27:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:08.767 18:27:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:29:08.767 "name": "raid_bdev1", 00:29:08.767 "uuid": "6c2cdec3-e518-4de5-b45f-fbfc2b1b952a", 00:29:08.767 "strip_size_kb": 0, 00:29:08.767 "state": "online", 00:29:08.767 "raid_level": "raid1", 00:29:08.767 "superblock": true, 00:29:08.767 "num_base_bdevs": 3, 00:29:08.767 "num_base_bdevs_discovered": 3, 00:29:08.767 "num_base_bdevs_operational": 3, 00:29:08.767 "base_bdevs_list": [ 00:29:08.767 { 00:29:08.767 "name": "BaseBdev1", 00:29:08.767 "uuid": "a49989de-1db2-5852-a9cc-33a46d0735e5", 
00:29:08.767 "is_configured": true, 00:29:08.767 "data_offset": 2048, 00:29:08.767 "data_size": 63488 00:29:08.767 }, 00:29:08.767 { 00:29:08.767 "name": "BaseBdev2", 00:29:08.767 "uuid": "5a537147-49e9-5e1c-8719-b017dd0f9d97", 00:29:08.767 "is_configured": true, 00:29:08.767 "data_offset": 2048, 00:29:08.767 "data_size": 63488 00:29:08.767 }, 00:29:08.767 { 00:29:08.767 "name": "BaseBdev3", 00:29:08.767 "uuid": "78947ee2-1dcb-5c58-8a94-adf715815e62", 00:29:08.767 "is_configured": true, 00:29:08.767 "data_offset": 2048, 00:29:08.767 "data_size": 63488 00:29:08.767 } 00:29:08.767 ] 00:29:08.767 }' 00:29:08.767 18:27:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:29:08.767 18:27:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:29:09.334 18:27:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:29:09.334 18:27:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:09.334 18:27:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:29:09.334 [2024-12-06 18:27:40.024827] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:29:09.334 [2024-12-06 18:27:40.024867] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:29:09.334 [2024-12-06 18:27:40.027790] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:29:09.334 [2024-12-06 18:27:40.027844] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:29:09.334 [2024-12-06 18:27:40.027945] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:29:09.334 [2024-12-06 18:27:40.027962] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:29:09.334 { 00:29:09.334 "results": [ 00:29:09.334 { 00:29:09.334 "job": "raid_bdev1", 
00:29:09.334 "core_mask": "0x1", 00:29:09.334 "workload": "randrw", 00:29:09.334 "percentage": 50, 00:29:09.334 "status": "finished", 00:29:09.334 "queue_depth": 1, 00:29:09.334 "io_size": 131072, 00:29:09.334 "runtime": 1.376927, 00:29:09.334 "iops": 13433.537144670705, 00:29:09.334 "mibps": 1679.1921430838381, 00:29:09.334 "io_failed": 0, 00:29:09.334 "io_timeout": 0, 00:29:09.334 "avg_latency_us": 71.68020621166615, 00:29:09.334 "min_latency_us": 24.674698795180724, 00:29:09.334 "max_latency_us": 1500.2216867469879 00:29:09.334 } 00:29:09.334 ], 00:29:09.334 "core_count": 1 00:29:09.334 } 00:29:09.334 18:27:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:09.334 18:27:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 68824 00:29:09.334 18:27:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 68824 ']' 00:29:09.334 18:27:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 68824 00:29:09.334 18:27:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:29:09.334 18:27:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:09.334 18:27:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 68824 00:29:09.334 18:27:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:09.334 18:27:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:09.334 killing process with pid 68824 00:29:09.334 18:27:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 68824' 00:29:09.334 18:27:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 68824 00:29:09.334 [2024-12-06 18:27:40.077759] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:29:09.334 18:27:40 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 68824 00:29:09.593 [2024-12-06 18:27:40.314891] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:29:10.999 18:27:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.WMCS74PR0s 00:29:10.999 18:27:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:29:10.999 18:27:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:29:10.999 18:27:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:29:10.999 18:27:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:29:10.999 18:27:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:29:10.999 18:27:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:29:10.999 18:27:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:29:10.999 00:29:10.999 real 0m4.586s 00:29:10.999 user 0m5.406s 00:29:10.999 sys 0m0.628s 00:29:10.999 18:27:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:10.999 18:27:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:29:10.999 ************************************ 00:29:10.999 END TEST raid_read_error_test 00:29:10.999 ************************************ 00:29:10.999 18:27:41 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid1 3 write 00:29:10.999 18:27:41 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:29:10.999 18:27:41 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:10.999 18:27:41 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:29:10.999 ************************************ 00:29:10.999 START TEST raid_write_error_test 00:29:10.999 ************************************ 00:29:10.999 18:27:41 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 3 write 00:29:10.999 18:27:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:29:10.999 18:27:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:29:10.999 18:27:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:29:10.999 18:27:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:29:10.999 18:27:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:29:10.999 18:27:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:29:10.999 18:27:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:29:10.999 18:27:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:29:10.999 18:27:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:29:10.999 18:27:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:29:10.999 18:27:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:29:10.999 18:27:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:29:10.999 18:27:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:29:10.999 18:27:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:29:10.999 18:27:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:29:10.999 18:27:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:29:10.999 18:27:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:29:10.999 18:27:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 
00:29:10.999 18:27:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:29:10.999 18:27:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:29:10.999 18:27:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:29:10.999 18:27:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:29:10.999 18:27:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:29:10.999 18:27:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:29:10.999 18:27:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.61XCUigEaX 00:29:10.999 18:27:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=68971 00:29:10.999 18:27:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 68971 00:29:10.999 18:27:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:29:10.999 18:27:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 68971 ']' 00:29:10.999 18:27:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:10.999 18:27:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:10.999 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:10.999 18:27:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:29:10.999 18:27:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:10.999 18:27:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:29:10.999 [2024-12-06 18:27:41.722313] Starting SPDK v25.01-pre git sha1 b6a18b192 / DPDK 24.03.0 initialization... 00:29:10.999 [2024-12-06 18:27:41.722444] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68971 ] 00:29:10.999 [2024-12-06 18:27:41.905480] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:11.258 [2024-12-06 18:27:42.020009] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:11.518 [2024-12-06 18:27:42.223931] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:29:11.518 [2024-12-06 18:27:42.224000] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:29:11.778 18:27:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:11.778 18:27:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:29:11.778 18:27:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:29:11.778 18:27:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:29:11.778 18:27:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:11.778 18:27:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:29:11.778 BaseBdev1_malloc 00:29:11.778 18:27:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:11.778 18:27:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create 
BaseBdev1_malloc 00:29:11.778 18:27:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:11.778 18:27:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:29:11.778 true 00:29:11.778 18:27:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:11.778 18:27:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:29:11.778 18:27:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:11.778 18:27:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:29:11.778 [2024-12-06 18:27:42.622678] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:29:11.778 [2024-12-06 18:27:42.622750] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:11.778 [2024-12-06 18:27:42.622775] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:29:11.778 [2024-12-06 18:27:42.622789] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:11.778 [2024-12-06 18:27:42.625349] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:11.778 [2024-12-06 18:27:42.625389] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:29:11.778 BaseBdev1 00:29:11.778 18:27:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:11.778 18:27:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:29:11.778 18:27:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:29:11.778 18:27:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:11.778 18:27:42 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:29:11.778 BaseBdev2_malloc 00:29:11.778 18:27:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:11.778 18:27:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:29:11.778 18:27:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:11.778 18:27:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:29:11.778 true 00:29:11.778 18:27:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:11.778 18:27:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:29:11.778 18:27:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:11.778 18:27:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:29:11.778 [2024-12-06 18:27:42.689295] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:29:11.778 [2024-12-06 18:27:42.689366] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:11.778 [2024-12-06 18:27:42.689390] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:29:11.778 [2024-12-06 18:27:42.689404] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:11.778 [2024-12-06 18:27:42.691921] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:11.778 [2024-12-06 18:27:42.691965] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:29:11.778 BaseBdev2 00:29:11.778 18:27:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:11.778 18:27:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:29:11.778 18:27:42 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:29:11.778 18:27:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:11.778 18:27:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:29:12.038 BaseBdev3_malloc 00:29:12.038 18:27:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:12.038 18:27:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:29:12.038 18:27:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:12.038 18:27:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:29:12.038 true 00:29:12.038 18:27:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:12.038 18:27:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:29:12.038 18:27:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:12.038 18:27:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:29:12.038 [2024-12-06 18:27:42.769286] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:29:12.038 [2024-12-06 18:27:42.769346] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:12.038 [2024-12-06 18:27:42.769370] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:29:12.038 [2024-12-06 18:27:42.769384] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:12.038 [2024-12-06 18:27:42.771814] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:12.038 [2024-12-06 18:27:42.771855] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
BaseBdev3 00:29:12.038 BaseBdev3 00:29:12.038 18:27:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:12.038 18:27:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:29:12.038 18:27:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:12.038 18:27:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:29:12.038 [2024-12-06 18:27:42.781332] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:29:12.038 [2024-12-06 18:27:42.783473] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:29:12.038 [2024-12-06 18:27:42.783552] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:29:12.038 [2024-12-06 18:27:42.783756] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:29:12.038 [2024-12-06 18:27:42.783769] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:29:12.038 [2024-12-06 18:27:42.784035] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:29:12.038 [2024-12-06 18:27:42.784229] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:29:12.038 [2024-12-06 18:27:42.784250] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:29:12.038 [2024-12-06 18:27:42.784400] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:29:12.038 18:27:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:12.038 18:27:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:29:12.038 18:27:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # 
local raid_bdev_name=raid_bdev1 00:29:12.038 18:27:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:29:12.038 18:27:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:29:12.038 18:27:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:29:12.038 18:27:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:29:12.038 18:27:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:29:12.038 18:27:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:29:12.038 18:27:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:29:12.038 18:27:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:29:12.038 18:27:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:12.038 18:27:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:12.038 18:27:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:29:12.038 18:27:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:12.038 18:27:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:12.038 18:27:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:29:12.038 "name": "raid_bdev1", 00:29:12.038 "uuid": "017e2450-35b2-4ec3-88e4-886833069ca8", 00:29:12.038 "strip_size_kb": 0, 00:29:12.038 "state": "online", 00:29:12.038 "raid_level": "raid1", 00:29:12.038 "superblock": true, 00:29:12.038 "num_base_bdevs": 3, 00:29:12.038 "num_base_bdevs_discovered": 3, 00:29:12.038 "num_base_bdevs_operational": 3, 00:29:12.038 "base_bdevs_list": [ 00:29:12.038 { 00:29:12.038 "name": "BaseBdev1", 00:29:12.038 
"uuid": "7761d970-e4c6-5bf2-9eac-6fae98185aa7", 00:29:12.038 "is_configured": true, 00:29:12.038 "data_offset": 2048, 00:29:12.038 "data_size": 63488 00:29:12.038 }, 00:29:12.038 { 00:29:12.038 "name": "BaseBdev2", 00:29:12.038 "uuid": "178d9a4c-db21-5c45-af8e-bd4638c629c8", 00:29:12.038 "is_configured": true, 00:29:12.038 "data_offset": 2048, 00:29:12.038 "data_size": 63488 00:29:12.038 }, 00:29:12.038 { 00:29:12.038 "name": "BaseBdev3", 00:29:12.038 "uuid": "ba4d31d5-1d03-54ee-87d3-bc218b9ff32e", 00:29:12.038 "is_configured": true, 00:29:12.038 "data_offset": 2048, 00:29:12.038 "data_size": 63488 00:29:12.038 } 00:29:12.038 ] 00:29:12.038 }' 00:29:12.038 18:27:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:29:12.038 18:27:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:29:12.298 18:27:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:29:12.298 18:27:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:29:12.558 [2024-12-06 18:27:43.282040] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:29:13.497 18:27:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:29:13.497 18:27:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:13.497 18:27:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:29:13.497 [2024-12-06 18:27:44.204792] bdev_raid.c:2276:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:29:13.497 [2024-12-06 18:27:44.204845] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:29:13.497 [2024-12-06 18:27:44.205098] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006700 
00:29:13.497 18:27:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:13.497 18:27:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:29:13.497 18:27:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:29:13.497 18:27:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ write = \w\r\i\t\e ]] 00:29:13.497 18:27:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=2 00:29:13.497 18:27:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:29:13.497 18:27:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:29:13.497 18:27:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:29:13.497 18:27:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:29:13.497 18:27:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:29:13.497 18:27:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:29:13.497 18:27:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:29:13.497 18:27:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:29:13.497 18:27:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:29:13.497 18:27:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:29:13.497 18:27:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:13.497 18:27:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:13.497 18:27:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:29:13.497 18:27:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:29:13.497 18:27:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:13.497 18:27:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:29:13.497 "name": "raid_bdev1", 00:29:13.497 "uuid": "017e2450-35b2-4ec3-88e4-886833069ca8", 00:29:13.497 "strip_size_kb": 0, 00:29:13.497 "state": "online", 00:29:13.497 "raid_level": "raid1", 00:29:13.497 "superblock": true, 00:29:13.497 "num_base_bdevs": 3, 00:29:13.497 "num_base_bdevs_discovered": 2, 00:29:13.497 "num_base_bdevs_operational": 2, 00:29:13.497 "base_bdevs_list": [ 00:29:13.497 { 00:29:13.497 "name": null, 00:29:13.497 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:13.497 "is_configured": false, 00:29:13.497 "data_offset": 0, 00:29:13.497 "data_size": 63488 00:29:13.497 }, 00:29:13.497 { 00:29:13.497 "name": "BaseBdev2", 00:29:13.497 "uuid": "178d9a4c-db21-5c45-af8e-bd4638c629c8", 00:29:13.497 "is_configured": true, 00:29:13.497 "data_offset": 2048, 00:29:13.497 "data_size": 63488 00:29:13.497 }, 00:29:13.497 { 00:29:13.497 "name": "BaseBdev3", 00:29:13.497 "uuid": "ba4d31d5-1d03-54ee-87d3-bc218b9ff32e", 00:29:13.497 "is_configured": true, 00:29:13.497 "data_offset": 2048, 00:29:13.497 "data_size": 63488 00:29:13.497 } 00:29:13.497 ] 00:29:13.497 }' 00:29:13.497 18:27:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:29:13.497 18:27:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:29:13.757 18:27:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:29:13.757 18:27:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:13.757 18:27:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:29:13.757 [2024-12-06 18:27:44.619121] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:29:13.757 [2024-12-06 18:27:44.619174] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:29:13.757 [2024-12-06 18:27:44.622061] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:29:13.757 [2024-12-06 18:27:44.622130] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:29:13.757 [2024-12-06 18:27:44.622232] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:29:13.757 [2024-12-06 18:27:44.622257] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:29:13.757 { 00:29:13.757 "results": [ 00:29:13.757 { 00:29:13.757 "job": "raid_bdev1", 00:29:13.757 "core_mask": "0x1", 00:29:13.757 "workload": "randrw", 00:29:13.757 "percentage": 50, 00:29:13.757 "status": "finished", 00:29:13.757 "queue_depth": 1, 00:29:13.757 "io_size": 131072, 00:29:13.757 "runtime": 1.337246, 00:29:13.757 "iops": 14784.116011564065, 00:29:13.757 "mibps": 1848.0145014455081, 00:29:13.757 "io_failed": 0, 00:29:13.757 "io_timeout": 0, 00:29:13.757 "avg_latency_us": 64.86164611912496, 00:29:13.757 "min_latency_us": 24.983132530120482, 00:29:13.757 "max_latency_us": 1546.2811244979919 00:29:13.757 } 00:29:13.757 ], 00:29:13.757 "core_count": 1 00:29:13.757 } 00:29:13.757 18:27:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:13.757 18:27:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 68971 00:29:13.757 18:27:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 68971 ']' 00:29:13.757 18:27:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 68971 00:29:13.757 18:27:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:29:13.757 18:27:44 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:13.757 18:27:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 68971 00:29:13.757 18:27:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:13.757 18:27:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:13.757 killing process with pid 68971 00:29:13.757 18:27:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 68971' 00:29:13.757 18:27:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 68971 00:29:13.757 [2024-12-06 18:27:44.660792] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:29:13.757 18:27:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 68971 00:29:14.018 [2024-12-06 18:27:44.902441] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:29:15.394 18:27:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.61XCUigEaX 00:29:15.394 18:27:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:29:15.394 18:27:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:29:15.394 18:27:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:29:15.394 18:27:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:29:15.394 18:27:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:29:15.394 18:27:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:29:15.394 18:27:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:29:15.394 00:29:15.394 real 0m4.508s 00:29:15.394 user 0m5.245s 00:29:15.394 sys 0m0.630s 00:29:15.394 18:27:46 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:15.394 18:27:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:29:15.394 ************************************ 00:29:15.394 END TEST raid_write_error_test 00:29:15.394 ************************************ 00:29:15.395 18:27:46 bdev_raid -- bdev/bdev_raid.sh@966 -- # for n in {2..4} 00:29:15.395 18:27:46 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:29:15.395 18:27:46 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid0 4 false 00:29:15.395 18:27:46 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:29:15.395 18:27:46 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:15.395 18:27:46 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:29:15.395 ************************************ 00:29:15.395 START TEST raid_state_function_test 00:29:15.395 ************************************ 00:29:15.395 18:27:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 4 false 00:29:15.395 18:27:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:29:15.395 18:27:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:29:15.395 18:27:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:29:15.395 18:27:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:29:15.395 18:27:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:29:15.395 18:27:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:29:15.395 18:27:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:29:15.395 18:27:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- 
# (( i++ )) 00:29:15.395 18:27:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:29:15.395 18:27:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:29:15.395 18:27:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:29:15.395 18:27:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:29:15.395 18:27:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:29:15.395 18:27:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:29:15.395 18:27:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:29:15.395 18:27:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:29:15.395 18:27:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:29:15.395 18:27:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:29:15.395 18:27:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:29:15.395 18:27:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:29:15.395 18:27:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:29:15.395 18:27:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:29:15.395 18:27:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:29:15.395 18:27:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:29:15.395 18:27:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:29:15.395 18:27:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:29:15.395 
18:27:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:29:15.395 18:27:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:29:15.395 18:27:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:29:15.395 18:27:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=69109 00:29:15.395 18:27:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:29:15.395 Process raid pid: 69109 00:29:15.395 18:27:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 69109' 00:29:15.395 18:27:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 69109 00:29:15.395 18:27:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 69109 ']' 00:29:15.395 18:27:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:15.395 18:27:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:15.395 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:15.395 18:27:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:15.395 18:27:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:15.395 18:27:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:15.395 [2024-12-06 18:27:46.299097] Starting SPDK v25.01-pre git sha1 b6a18b192 / DPDK 24.03.0 initialization... 
00:29:15.395 [2024-12-06 18:27:46.299237] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:15.652 [2024-12-06 18:27:46.481365] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:15.652 [2024-12-06 18:27:46.598395] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:15.910 [2024-12-06 18:27:46.813735] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:29:15.910 [2024-12-06 18:27:46.813771] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:29:16.477 18:27:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:16.477 18:27:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:29:16.477 18:27:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:29:16.477 18:27:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:16.477 18:27:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:16.477 [2024-12-06 18:27:47.153854] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:29:16.477 [2024-12-06 18:27:47.153929] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:29:16.477 [2024-12-06 18:27:47.153944] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:29:16.477 [2024-12-06 18:27:47.153958] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:29:16.477 [2024-12-06 18:27:47.153965] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:29:16.477 [2024-12-06 18:27:47.153977] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:29:16.477 [2024-12-06 18:27:47.153985] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:29:16.477 [2024-12-06 18:27:47.153997] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:29:16.477 18:27:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:16.477 18:27:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:29:16.477 18:27:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:29:16.477 18:27:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:29:16.477 18:27:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:29:16.477 18:27:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:29:16.477 18:27:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:29:16.477 18:27:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:29:16.477 18:27:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:29:16.477 18:27:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:29:16.477 18:27:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:29:16.477 18:27:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:16.477 18:27:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:16.477 18:27:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:29:16.477 18:27:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:29:16.477 18:27:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:16.477 18:27:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:29:16.477 "name": "Existed_Raid", 00:29:16.477 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:16.477 "strip_size_kb": 64, 00:29:16.477 "state": "configuring", 00:29:16.477 "raid_level": "raid0", 00:29:16.477 "superblock": false, 00:29:16.477 "num_base_bdevs": 4, 00:29:16.477 "num_base_bdevs_discovered": 0, 00:29:16.477 "num_base_bdevs_operational": 4, 00:29:16.477 "base_bdevs_list": [ 00:29:16.477 { 00:29:16.477 "name": "BaseBdev1", 00:29:16.477 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:16.477 "is_configured": false, 00:29:16.477 "data_offset": 0, 00:29:16.477 "data_size": 0 00:29:16.477 }, 00:29:16.477 { 00:29:16.477 "name": "BaseBdev2", 00:29:16.477 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:16.477 "is_configured": false, 00:29:16.477 "data_offset": 0, 00:29:16.477 "data_size": 0 00:29:16.477 }, 00:29:16.477 { 00:29:16.477 "name": "BaseBdev3", 00:29:16.477 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:16.477 "is_configured": false, 00:29:16.477 "data_offset": 0, 00:29:16.477 "data_size": 0 00:29:16.477 }, 00:29:16.477 { 00:29:16.477 "name": "BaseBdev4", 00:29:16.477 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:16.477 "is_configured": false, 00:29:16.477 "data_offset": 0, 00:29:16.477 "data_size": 0 00:29:16.477 } 00:29:16.477 ] 00:29:16.477 }' 00:29:16.477 18:27:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:29:16.477 18:27:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:16.788 18:27:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 
00:29:16.788 18:27:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:16.788 18:27:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:16.788 [2024-12-06 18:27:47.581851] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:29:16.788 [2024-12-06 18:27:47.581895] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:29:16.788 18:27:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:16.788 18:27:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:29:16.788 18:27:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:16.788 18:27:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:16.788 [2024-12-06 18:27:47.593825] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:29:16.788 [2024-12-06 18:27:47.593870] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:29:16.788 [2024-12-06 18:27:47.593880] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:29:16.788 [2024-12-06 18:27:47.593910] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:29:16.788 [2024-12-06 18:27:47.593918] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:29:16.788 [2024-12-06 18:27:47.593931] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:29:16.788 [2024-12-06 18:27:47.593941] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:29:16.788 [2024-12-06 18:27:47.593953] bdev_raid_rpc.c: 
311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:29:16.788 18:27:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:16.788 18:27:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:29:16.788 18:27:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:16.788 18:27:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:16.788 [2024-12-06 18:27:47.638444] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:29:16.788 BaseBdev1 00:29:16.788 18:27:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:16.788 18:27:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:29:16.788 18:27:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:29:16.788 18:27:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:29:16.788 18:27:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:29:16.788 18:27:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:29:16.788 18:27:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:29:16.788 18:27:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:29:16.788 18:27:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:16.788 18:27:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:16.788 18:27:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:16.788 18:27:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev1 -t 2000 00:29:16.788 18:27:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:16.788 18:27:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:16.788 [ 00:29:16.788 { 00:29:16.788 "name": "BaseBdev1", 00:29:16.788 "aliases": [ 00:29:16.788 "c1345649-8f1c-4ef6-9a3f-5dc7d33d8ef7" 00:29:16.788 ], 00:29:16.788 "product_name": "Malloc disk", 00:29:16.788 "block_size": 512, 00:29:16.788 "num_blocks": 65536, 00:29:16.788 "uuid": "c1345649-8f1c-4ef6-9a3f-5dc7d33d8ef7", 00:29:16.788 "assigned_rate_limits": { 00:29:16.788 "rw_ios_per_sec": 0, 00:29:16.788 "rw_mbytes_per_sec": 0, 00:29:16.788 "r_mbytes_per_sec": 0, 00:29:16.788 "w_mbytes_per_sec": 0 00:29:16.788 }, 00:29:16.788 "claimed": true, 00:29:16.788 "claim_type": "exclusive_write", 00:29:16.788 "zoned": false, 00:29:16.788 "supported_io_types": { 00:29:16.788 "read": true, 00:29:16.788 "write": true, 00:29:16.788 "unmap": true, 00:29:16.788 "flush": true, 00:29:16.788 "reset": true, 00:29:16.788 "nvme_admin": false, 00:29:16.788 "nvme_io": false, 00:29:16.788 "nvme_io_md": false, 00:29:16.788 "write_zeroes": true, 00:29:16.788 "zcopy": true, 00:29:16.788 "get_zone_info": false, 00:29:16.788 "zone_management": false, 00:29:16.788 "zone_append": false, 00:29:16.788 "compare": false, 00:29:16.788 "compare_and_write": false, 00:29:16.788 "abort": true, 00:29:16.788 "seek_hole": false, 00:29:16.788 "seek_data": false, 00:29:16.788 "copy": true, 00:29:16.788 "nvme_iov_md": false 00:29:16.788 }, 00:29:16.788 "memory_domains": [ 00:29:16.788 { 00:29:16.788 "dma_device_id": "system", 00:29:16.788 "dma_device_type": 1 00:29:16.788 }, 00:29:16.788 { 00:29:16.788 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:29:16.788 "dma_device_type": 2 00:29:16.788 } 00:29:16.788 ], 00:29:16.788 "driver_specific": {} 00:29:16.788 } 00:29:16.788 ] 00:29:16.788 18:27:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:29:16.788 18:27:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:29:16.788 18:27:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:29:16.788 18:27:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:29:16.788 18:27:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:29:16.788 18:27:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:29:16.788 18:27:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:29:16.788 18:27:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:29:16.788 18:27:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:29:16.788 18:27:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:29:16.788 18:27:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:29:16.788 18:27:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:29:16.788 18:27:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:16.788 18:27:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:29:16.788 18:27:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:16.788 18:27:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:17.050 18:27:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:17.050 18:27:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:29:17.050 "name": "Existed_Raid", 
00:29:17.050 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:17.050 "strip_size_kb": 64, 00:29:17.050 "state": "configuring", 00:29:17.050 "raid_level": "raid0", 00:29:17.050 "superblock": false, 00:29:17.050 "num_base_bdevs": 4, 00:29:17.050 "num_base_bdevs_discovered": 1, 00:29:17.050 "num_base_bdevs_operational": 4, 00:29:17.050 "base_bdevs_list": [ 00:29:17.050 { 00:29:17.050 "name": "BaseBdev1", 00:29:17.050 "uuid": "c1345649-8f1c-4ef6-9a3f-5dc7d33d8ef7", 00:29:17.050 "is_configured": true, 00:29:17.050 "data_offset": 0, 00:29:17.050 "data_size": 65536 00:29:17.050 }, 00:29:17.050 { 00:29:17.050 "name": "BaseBdev2", 00:29:17.050 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:17.050 "is_configured": false, 00:29:17.050 "data_offset": 0, 00:29:17.050 "data_size": 0 00:29:17.050 }, 00:29:17.050 { 00:29:17.050 "name": "BaseBdev3", 00:29:17.050 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:17.050 "is_configured": false, 00:29:17.050 "data_offset": 0, 00:29:17.050 "data_size": 0 00:29:17.050 }, 00:29:17.050 { 00:29:17.050 "name": "BaseBdev4", 00:29:17.050 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:17.050 "is_configured": false, 00:29:17.050 "data_offset": 0, 00:29:17.050 "data_size": 0 00:29:17.050 } 00:29:17.050 ] 00:29:17.050 }' 00:29:17.050 18:27:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:29:17.050 18:27:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:17.309 18:27:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:29:17.309 18:27:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:17.309 18:27:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:17.309 [2024-12-06 18:27:48.145855] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:29:17.309 [2024-12-06 18:27:48.145914] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:29:17.309 18:27:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:17.309 18:27:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:29:17.309 18:27:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:17.309 18:27:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:17.309 [2024-12-06 18:27:48.157871] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:29:17.309 [2024-12-06 18:27:48.159930] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:29:17.309 [2024-12-06 18:27:48.159974] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:29:17.309 [2024-12-06 18:27:48.159986] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:29:17.309 [2024-12-06 18:27:48.160000] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:29:17.309 [2024-12-06 18:27:48.160008] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:29:17.309 [2024-12-06 18:27:48.160019] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:29:17.309 18:27:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:17.309 18:27:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:29:17.309 18:27:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:29:17.309 18:27:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 
00:29:17.309 18:27:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:29:17.309 18:27:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:29:17.309 18:27:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:29:17.309 18:27:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:29:17.309 18:27:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:29:17.309 18:27:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:29:17.309 18:27:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:29:17.309 18:27:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:29:17.309 18:27:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:29:17.309 18:27:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:17.309 18:27:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:29:17.309 18:27:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:17.309 18:27:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:17.309 18:27:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:17.309 18:27:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:29:17.309 "name": "Existed_Raid", 00:29:17.309 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:17.309 "strip_size_kb": 64, 00:29:17.309 "state": "configuring", 00:29:17.309 "raid_level": "raid0", 00:29:17.309 "superblock": false, 00:29:17.309 "num_base_bdevs": 4, 00:29:17.309 
"num_base_bdevs_discovered": 1, 00:29:17.309 "num_base_bdevs_operational": 4, 00:29:17.309 "base_bdevs_list": [ 00:29:17.309 { 00:29:17.309 "name": "BaseBdev1", 00:29:17.309 "uuid": "c1345649-8f1c-4ef6-9a3f-5dc7d33d8ef7", 00:29:17.309 "is_configured": true, 00:29:17.309 "data_offset": 0, 00:29:17.309 "data_size": 65536 00:29:17.309 }, 00:29:17.309 { 00:29:17.309 "name": "BaseBdev2", 00:29:17.309 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:17.309 "is_configured": false, 00:29:17.309 "data_offset": 0, 00:29:17.309 "data_size": 0 00:29:17.309 }, 00:29:17.309 { 00:29:17.309 "name": "BaseBdev3", 00:29:17.309 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:17.309 "is_configured": false, 00:29:17.309 "data_offset": 0, 00:29:17.309 "data_size": 0 00:29:17.309 }, 00:29:17.309 { 00:29:17.309 "name": "BaseBdev4", 00:29:17.309 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:17.309 "is_configured": false, 00:29:17.309 "data_offset": 0, 00:29:17.309 "data_size": 0 00:29:17.309 } 00:29:17.309 ] 00:29:17.309 }' 00:29:17.309 18:27:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:29:17.309 18:27:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:17.876 18:27:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:29:17.876 18:27:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:17.876 18:27:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:17.876 [2024-12-06 18:27:48.605678] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:29:17.876 BaseBdev2 00:29:17.876 18:27:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:17.876 18:27:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:29:17.876 18:27:48 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:29:17.876 18:27:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:29:17.876 18:27:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:29:17.876 18:27:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:29:17.876 18:27:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:29:17.876 18:27:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:29:17.876 18:27:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:17.876 18:27:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:17.876 18:27:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:17.876 18:27:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:29:17.876 18:27:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:17.876 18:27:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:17.876 [ 00:29:17.876 { 00:29:17.876 "name": "BaseBdev2", 00:29:17.876 "aliases": [ 00:29:17.876 "fc14fcd1-7b21-4309-8ca2-19204ed86cc6" 00:29:17.876 ], 00:29:17.876 "product_name": "Malloc disk", 00:29:17.876 "block_size": 512, 00:29:17.876 "num_blocks": 65536, 00:29:17.876 "uuid": "fc14fcd1-7b21-4309-8ca2-19204ed86cc6", 00:29:17.876 "assigned_rate_limits": { 00:29:17.876 "rw_ios_per_sec": 0, 00:29:17.876 "rw_mbytes_per_sec": 0, 00:29:17.876 "r_mbytes_per_sec": 0, 00:29:17.876 "w_mbytes_per_sec": 0 00:29:17.876 }, 00:29:17.876 "claimed": true, 00:29:17.876 "claim_type": "exclusive_write", 00:29:17.876 "zoned": false, 00:29:17.876 "supported_io_types": { 
00:29:17.876 "read": true, 00:29:17.876 "write": true, 00:29:17.876 "unmap": true, 00:29:17.876 "flush": true, 00:29:17.876 "reset": true, 00:29:17.876 "nvme_admin": false, 00:29:17.876 "nvme_io": false, 00:29:17.876 "nvme_io_md": false, 00:29:17.876 "write_zeroes": true, 00:29:17.876 "zcopy": true, 00:29:17.876 "get_zone_info": false, 00:29:17.876 "zone_management": false, 00:29:17.876 "zone_append": false, 00:29:17.876 "compare": false, 00:29:17.876 "compare_and_write": false, 00:29:17.876 "abort": true, 00:29:17.876 "seek_hole": false, 00:29:17.876 "seek_data": false, 00:29:17.876 "copy": true, 00:29:17.876 "nvme_iov_md": false 00:29:17.876 }, 00:29:17.876 "memory_domains": [ 00:29:17.876 { 00:29:17.876 "dma_device_id": "system", 00:29:17.876 "dma_device_type": 1 00:29:17.876 }, 00:29:17.876 { 00:29:17.876 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:29:17.876 "dma_device_type": 2 00:29:17.876 } 00:29:17.876 ], 00:29:17.876 "driver_specific": {} 00:29:17.876 } 00:29:17.876 ] 00:29:17.876 18:27:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:17.876 18:27:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:29:17.876 18:27:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:29:17.876 18:27:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:29:17.876 18:27:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:29:17.876 18:27:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:29:17.876 18:27:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:29:17.876 18:27:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:29:17.876 18:27:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # 
local strip_size=64 00:29:17.876 18:27:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:29:17.876 18:27:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:29:17.876 18:27:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:29:17.876 18:27:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:29:17.876 18:27:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:29:17.876 18:27:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:17.876 18:27:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:29:17.876 18:27:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:17.876 18:27:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:17.876 18:27:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:17.876 18:27:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:29:17.876 "name": "Existed_Raid", 00:29:17.876 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:17.876 "strip_size_kb": 64, 00:29:17.876 "state": "configuring", 00:29:17.876 "raid_level": "raid0", 00:29:17.876 "superblock": false, 00:29:17.876 "num_base_bdevs": 4, 00:29:17.876 "num_base_bdevs_discovered": 2, 00:29:17.876 "num_base_bdevs_operational": 4, 00:29:17.876 "base_bdevs_list": [ 00:29:17.876 { 00:29:17.876 "name": "BaseBdev1", 00:29:17.876 "uuid": "c1345649-8f1c-4ef6-9a3f-5dc7d33d8ef7", 00:29:17.876 "is_configured": true, 00:29:17.876 "data_offset": 0, 00:29:17.876 "data_size": 65536 00:29:17.876 }, 00:29:17.876 { 00:29:17.876 "name": "BaseBdev2", 00:29:17.876 "uuid": "fc14fcd1-7b21-4309-8ca2-19204ed86cc6", 00:29:17.876 
"is_configured": true, 00:29:17.876 "data_offset": 0, 00:29:17.876 "data_size": 65536 00:29:17.876 }, 00:29:17.876 { 00:29:17.876 "name": "BaseBdev3", 00:29:17.876 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:17.876 "is_configured": false, 00:29:17.876 "data_offset": 0, 00:29:17.876 "data_size": 0 00:29:17.876 }, 00:29:17.876 { 00:29:17.876 "name": "BaseBdev4", 00:29:17.876 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:17.876 "is_configured": false, 00:29:17.876 "data_offset": 0, 00:29:17.876 "data_size": 0 00:29:17.876 } 00:29:17.876 ] 00:29:17.876 }' 00:29:17.876 18:27:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:29:17.876 18:27:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:18.135 18:27:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:29:18.135 18:27:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:18.135 18:27:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:18.394 [2024-12-06 18:27:49.107297] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:29:18.394 BaseBdev3 00:29:18.394 18:27:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:18.394 18:27:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:29:18.394 18:27:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:29:18.394 18:27:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:29:18.394 18:27:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:29:18.394 18:27:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:29:18.394 18:27:49 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # bdev_timeout=2000 00:29:18.394 18:27:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:29:18.394 18:27:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:18.394 18:27:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:18.394 18:27:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:18.394 18:27:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:29:18.394 18:27:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:18.394 18:27:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:18.394 [ 00:29:18.394 { 00:29:18.394 "name": "BaseBdev3", 00:29:18.394 "aliases": [ 00:29:18.394 "bfff6453-9dde-4dfd-80e3-aa16b4e9aff0" 00:29:18.394 ], 00:29:18.394 "product_name": "Malloc disk", 00:29:18.394 "block_size": 512, 00:29:18.394 "num_blocks": 65536, 00:29:18.394 "uuid": "bfff6453-9dde-4dfd-80e3-aa16b4e9aff0", 00:29:18.394 "assigned_rate_limits": { 00:29:18.394 "rw_ios_per_sec": 0, 00:29:18.394 "rw_mbytes_per_sec": 0, 00:29:18.394 "r_mbytes_per_sec": 0, 00:29:18.394 "w_mbytes_per_sec": 0 00:29:18.394 }, 00:29:18.394 "claimed": true, 00:29:18.394 "claim_type": "exclusive_write", 00:29:18.394 "zoned": false, 00:29:18.394 "supported_io_types": { 00:29:18.394 "read": true, 00:29:18.394 "write": true, 00:29:18.394 "unmap": true, 00:29:18.394 "flush": true, 00:29:18.394 "reset": true, 00:29:18.394 "nvme_admin": false, 00:29:18.394 "nvme_io": false, 00:29:18.394 "nvme_io_md": false, 00:29:18.394 "write_zeroes": true, 00:29:18.394 "zcopy": true, 00:29:18.394 "get_zone_info": false, 00:29:18.394 "zone_management": false, 00:29:18.394 "zone_append": false, 00:29:18.394 "compare": false, 00:29:18.394 "compare_and_write": false, 
00:29:18.394 "abort": true, 00:29:18.394 "seek_hole": false, 00:29:18.394 "seek_data": false, 00:29:18.394 "copy": true, 00:29:18.394 "nvme_iov_md": false 00:29:18.394 }, 00:29:18.394 "memory_domains": [ 00:29:18.394 { 00:29:18.394 "dma_device_id": "system", 00:29:18.394 "dma_device_type": 1 00:29:18.394 }, 00:29:18.394 { 00:29:18.394 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:29:18.394 "dma_device_type": 2 00:29:18.394 } 00:29:18.394 ], 00:29:18.394 "driver_specific": {} 00:29:18.394 } 00:29:18.394 ] 00:29:18.394 18:27:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:18.394 18:27:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:29:18.394 18:27:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:29:18.394 18:27:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:29:18.394 18:27:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:29:18.394 18:27:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:29:18.394 18:27:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:29:18.394 18:27:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:29:18.394 18:27:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:29:18.394 18:27:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:29:18.394 18:27:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:29:18.394 18:27:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:29:18.394 18:27:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:29:18.394 18:27:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:29:18.394 18:27:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:18.394 18:27:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:29:18.394 18:27:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:18.394 18:27:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:18.394 18:27:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:18.394 18:27:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:29:18.394 "name": "Existed_Raid", 00:29:18.394 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:18.394 "strip_size_kb": 64, 00:29:18.394 "state": "configuring", 00:29:18.394 "raid_level": "raid0", 00:29:18.394 "superblock": false, 00:29:18.394 "num_base_bdevs": 4, 00:29:18.394 "num_base_bdevs_discovered": 3, 00:29:18.394 "num_base_bdevs_operational": 4, 00:29:18.394 "base_bdevs_list": [ 00:29:18.394 { 00:29:18.394 "name": "BaseBdev1", 00:29:18.394 "uuid": "c1345649-8f1c-4ef6-9a3f-5dc7d33d8ef7", 00:29:18.394 "is_configured": true, 00:29:18.394 "data_offset": 0, 00:29:18.394 "data_size": 65536 00:29:18.394 }, 00:29:18.394 { 00:29:18.394 "name": "BaseBdev2", 00:29:18.394 "uuid": "fc14fcd1-7b21-4309-8ca2-19204ed86cc6", 00:29:18.394 "is_configured": true, 00:29:18.394 "data_offset": 0, 00:29:18.394 "data_size": 65536 00:29:18.394 }, 00:29:18.394 { 00:29:18.394 "name": "BaseBdev3", 00:29:18.394 "uuid": "bfff6453-9dde-4dfd-80e3-aa16b4e9aff0", 00:29:18.394 "is_configured": true, 00:29:18.394 "data_offset": 0, 00:29:18.394 "data_size": 65536 00:29:18.394 }, 00:29:18.394 { 00:29:18.394 "name": "BaseBdev4", 00:29:18.394 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:18.394 "is_configured": false, 
00:29:18.394 "data_offset": 0, 00:29:18.394 "data_size": 0 00:29:18.394 } 00:29:18.394 ] 00:29:18.394 }' 00:29:18.394 18:27:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:29:18.394 18:27:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:18.654 18:27:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:29:18.654 18:27:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:18.654 18:27:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:18.913 [2024-12-06 18:27:49.617134] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:29:18.913 [2024-12-06 18:27:49.617220] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:29:18.913 [2024-12-06 18:27:49.617232] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:29:18.913 [2024-12-06 18:27:49.617527] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:29:18.913 [2024-12-06 18:27:49.617690] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:29:18.913 [2024-12-06 18:27:49.617704] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:29:18.913 [2024-12-06 18:27:49.617971] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:29:18.913 BaseBdev4 00:29:18.913 18:27:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:18.913 18:27:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:29:18.913 18:27:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:29:18.913 18:27:49 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@904 -- # local bdev_timeout= 00:29:18.913 18:27:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:29:18.913 18:27:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:29:18.913 18:27:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:29:18.913 18:27:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:29:18.913 18:27:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:18.913 18:27:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:18.913 18:27:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:18.913 18:27:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:29:18.913 18:27:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:18.913 18:27:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:18.913 [ 00:29:18.913 { 00:29:18.913 "name": "BaseBdev4", 00:29:18.913 "aliases": [ 00:29:18.913 "152e09dc-d453-4e47-baf9-23313cce79c1" 00:29:18.913 ], 00:29:18.913 "product_name": "Malloc disk", 00:29:18.913 "block_size": 512, 00:29:18.913 "num_blocks": 65536, 00:29:18.913 "uuid": "152e09dc-d453-4e47-baf9-23313cce79c1", 00:29:18.913 "assigned_rate_limits": { 00:29:18.913 "rw_ios_per_sec": 0, 00:29:18.913 "rw_mbytes_per_sec": 0, 00:29:18.913 "r_mbytes_per_sec": 0, 00:29:18.914 "w_mbytes_per_sec": 0 00:29:18.914 }, 00:29:18.914 "claimed": true, 00:29:18.914 "claim_type": "exclusive_write", 00:29:18.914 "zoned": false, 00:29:18.914 "supported_io_types": { 00:29:18.914 "read": true, 00:29:18.914 "write": true, 00:29:18.914 "unmap": true, 00:29:18.914 "flush": true, 00:29:18.914 "reset": true, 00:29:18.914 
"nvme_admin": false, 00:29:18.914 "nvme_io": false, 00:29:18.914 "nvme_io_md": false, 00:29:18.914 "write_zeroes": true, 00:29:18.914 "zcopy": true, 00:29:18.914 "get_zone_info": false, 00:29:18.914 "zone_management": false, 00:29:18.914 "zone_append": false, 00:29:18.914 "compare": false, 00:29:18.914 "compare_and_write": false, 00:29:18.914 "abort": true, 00:29:18.914 "seek_hole": false, 00:29:18.914 "seek_data": false, 00:29:18.914 "copy": true, 00:29:18.914 "nvme_iov_md": false 00:29:18.914 }, 00:29:18.914 "memory_domains": [ 00:29:18.914 { 00:29:18.914 "dma_device_id": "system", 00:29:18.914 "dma_device_type": 1 00:29:18.914 }, 00:29:18.914 { 00:29:18.914 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:29:18.914 "dma_device_type": 2 00:29:18.914 } 00:29:18.914 ], 00:29:18.914 "driver_specific": {} 00:29:18.914 } 00:29:18.914 ] 00:29:18.914 18:27:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:18.914 18:27:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:29:18.914 18:27:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:29:18.914 18:27:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:29:18.914 18:27:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:29:18.914 18:27:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:29:18.914 18:27:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:29:18.914 18:27:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:29:18.914 18:27:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:29:18.914 18:27:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:29:18.914 18:27:49 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:29:18.914 18:27:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:29:18.914 18:27:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:29:18.914 18:27:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:29:18.914 18:27:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:29:18.914 18:27:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:18.914 18:27:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:18.914 18:27:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:18.914 18:27:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:18.914 18:27:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:29:18.914 "name": "Existed_Raid", 00:29:18.914 "uuid": "7e45284e-d9ef-4242-a341-da2e9cd84075", 00:29:18.914 "strip_size_kb": 64, 00:29:18.914 "state": "online", 00:29:18.914 "raid_level": "raid0", 00:29:18.914 "superblock": false, 00:29:18.914 "num_base_bdevs": 4, 00:29:18.914 "num_base_bdevs_discovered": 4, 00:29:18.914 "num_base_bdevs_operational": 4, 00:29:18.914 "base_bdevs_list": [ 00:29:18.914 { 00:29:18.914 "name": "BaseBdev1", 00:29:18.914 "uuid": "c1345649-8f1c-4ef6-9a3f-5dc7d33d8ef7", 00:29:18.914 "is_configured": true, 00:29:18.914 "data_offset": 0, 00:29:18.914 "data_size": 65536 00:29:18.914 }, 00:29:18.914 { 00:29:18.914 "name": "BaseBdev2", 00:29:18.914 "uuid": "fc14fcd1-7b21-4309-8ca2-19204ed86cc6", 00:29:18.914 "is_configured": true, 00:29:18.914 "data_offset": 0, 00:29:18.914 "data_size": 65536 00:29:18.914 }, 00:29:18.914 { 00:29:18.914 "name": "BaseBdev3", 00:29:18.914 "uuid": 
"bfff6453-9dde-4dfd-80e3-aa16b4e9aff0", 00:29:18.914 "is_configured": true, 00:29:18.914 "data_offset": 0, 00:29:18.914 "data_size": 65536 00:29:18.914 }, 00:29:18.914 { 00:29:18.914 "name": "BaseBdev4", 00:29:18.914 "uuid": "152e09dc-d453-4e47-baf9-23313cce79c1", 00:29:18.914 "is_configured": true, 00:29:18.914 "data_offset": 0, 00:29:18.914 "data_size": 65536 00:29:18.914 } 00:29:18.914 ] 00:29:18.914 }' 00:29:18.914 18:27:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:29:18.914 18:27:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:19.173 18:27:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:29:19.173 18:27:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:29:19.173 18:27:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:29:19.173 18:27:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:29:19.173 18:27:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:29:19.173 18:27:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:29:19.173 18:27:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:29:19.173 18:27:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:19.173 18:27:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:29:19.173 18:27:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:19.173 [2024-12-06 18:27:50.084899] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:29:19.173 18:27:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:19.432 18:27:50 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:29:19.432 "name": "Existed_Raid", 00:29:19.432 "aliases": [ 00:29:19.432 "7e45284e-d9ef-4242-a341-da2e9cd84075" 00:29:19.432 ], 00:29:19.432 "product_name": "Raid Volume", 00:29:19.432 "block_size": 512, 00:29:19.432 "num_blocks": 262144, 00:29:19.432 "uuid": "7e45284e-d9ef-4242-a341-da2e9cd84075", 00:29:19.432 "assigned_rate_limits": { 00:29:19.432 "rw_ios_per_sec": 0, 00:29:19.432 "rw_mbytes_per_sec": 0, 00:29:19.432 "r_mbytes_per_sec": 0, 00:29:19.432 "w_mbytes_per_sec": 0 00:29:19.432 }, 00:29:19.432 "claimed": false, 00:29:19.432 "zoned": false, 00:29:19.432 "supported_io_types": { 00:29:19.432 "read": true, 00:29:19.432 "write": true, 00:29:19.432 "unmap": true, 00:29:19.432 "flush": true, 00:29:19.432 "reset": true, 00:29:19.432 "nvme_admin": false, 00:29:19.432 "nvme_io": false, 00:29:19.432 "nvme_io_md": false, 00:29:19.432 "write_zeroes": true, 00:29:19.432 "zcopy": false, 00:29:19.432 "get_zone_info": false, 00:29:19.432 "zone_management": false, 00:29:19.432 "zone_append": false, 00:29:19.432 "compare": false, 00:29:19.432 "compare_and_write": false, 00:29:19.432 "abort": false, 00:29:19.432 "seek_hole": false, 00:29:19.432 "seek_data": false, 00:29:19.432 "copy": false, 00:29:19.432 "nvme_iov_md": false 00:29:19.432 }, 00:29:19.432 "memory_domains": [ 00:29:19.432 { 00:29:19.432 "dma_device_id": "system", 00:29:19.432 "dma_device_type": 1 00:29:19.432 }, 00:29:19.432 { 00:29:19.432 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:29:19.432 "dma_device_type": 2 00:29:19.432 }, 00:29:19.432 { 00:29:19.432 "dma_device_id": "system", 00:29:19.432 "dma_device_type": 1 00:29:19.432 }, 00:29:19.432 { 00:29:19.432 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:29:19.432 "dma_device_type": 2 00:29:19.432 }, 00:29:19.432 { 00:29:19.432 "dma_device_id": "system", 00:29:19.432 "dma_device_type": 1 00:29:19.432 }, 00:29:19.432 { 00:29:19.432 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:29:19.432 "dma_device_type": 2 00:29:19.432 }, 00:29:19.432 { 00:29:19.432 "dma_device_id": "system", 00:29:19.432 "dma_device_type": 1 00:29:19.432 }, 00:29:19.432 { 00:29:19.432 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:29:19.432 "dma_device_type": 2 00:29:19.432 } 00:29:19.432 ], 00:29:19.432 "driver_specific": { 00:29:19.432 "raid": { 00:29:19.432 "uuid": "7e45284e-d9ef-4242-a341-da2e9cd84075", 00:29:19.432 "strip_size_kb": 64, 00:29:19.432 "state": "online", 00:29:19.432 "raid_level": "raid0", 00:29:19.432 "superblock": false, 00:29:19.432 "num_base_bdevs": 4, 00:29:19.432 "num_base_bdevs_discovered": 4, 00:29:19.432 "num_base_bdevs_operational": 4, 00:29:19.432 "base_bdevs_list": [ 00:29:19.432 { 00:29:19.432 "name": "BaseBdev1", 00:29:19.432 "uuid": "c1345649-8f1c-4ef6-9a3f-5dc7d33d8ef7", 00:29:19.432 "is_configured": true, 00:29:19.432 "data_offset": 0, 00:29:19.432 "data_size": 65536 00:29:19.432 }, 00:29:19.432 { 00:29:19.432 "name": "BaseBdev2", 00:29:19.432 "uuid": "fc14fcd1-7b21-4309-8ca2-19204ed86cc6", 00:29:19.432 "is_configured": true, 00:29:19.432 "data_offset": 0, 00:29:19.432 "data_size": 65536 00:29:19.432 }, 00:29:19.432 { 00:29:19.432 "name": "BaseBdev3", 00:29:19.432 "uuid": "bfff6453-9dde-4dfd-80e3-aa16b4e9aff0", 00:29:19.432 "is_configured": true, 00:29:19.432 "data_offset": 0, 00:29:19.432 "data_size": 65536 00:29:19.432 }, 00:29:19.432 { 00:29:19.432 "name": "BaseBdev4", 00:29:19.432 "uuid": "152e09dc-d453-4e47-baf9-23313cce79c1", 00:29:19.432 "is_configured": true, 00:29:19.432 "data_offset": 0, 00:29:19.432 "data_size": 65536 00:29:19.432 } 00:29:19.432 ] 00:29:19.432 } 00:29:19.433 } 00:29:19.433 }' 00:29:19.433 18:27:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:29:19.433 18:27:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:29:19.433 BaseBdev2 00:29:19.433 BaseBdev3 
00:29:19.433 BaseBdev4' 00:29:19.433 18:27:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:29:19.433 18:27:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:29:19.433 18:27:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:29:19.433 18:27:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:29:19.433 18:27:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:29:19.433 18:27:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:19.433 18:27:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:19.433 18:27:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:19.433 18:27:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:29:19.433 18:27:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:29:19.433 18:27:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:29:19.433 18:27:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:29:19.433 18:27:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:29:19.433 18:27:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:19.433 18:27:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:19.433 18:27:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:19.433 18:27:50 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:29:19.433 18:27:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:29:19.433 18:27:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:29:19.433 18:27:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:29:19.433 18:27:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:29:19.433 18:27:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:19.433 18:27:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:19.433 18:27:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:19.433 18:27:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:29:19.433 18:27:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:29:19.433 18:27:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:29:19.433 18:27:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:29:19.433 18:27:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:29:19.433 18:27:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:19.433 18:27:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:19.433 18:27:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:19.433 18:27:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:29:19.433 18:27:50 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:29:19.433 18:27:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:29:19.433 18:27:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:19.433 18:27:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:19.433 [2024-12-06 18:27:50.364313] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:29:19.433 [2024-12-06 18:27:50.364450] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:29:19.433 [2024-12-06 18:27:50.364520] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:29:19.691 18:27:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:19.691 18:27:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:29:19.691 18:27:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:29:19.691 18:27:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:29:19.691 18:27:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:29:19.691 18:27:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:29:19.691 18:27:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:29:19.691 18:27:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:29:19.691 18:27:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:29:19.691 18:27:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:29:19.691 18:27:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local 
strip_size=64 00:29:19.691 18:27:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:29:19.691 18:27:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:29:19.691 18:27:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:29:19.691 18:27:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:29:19.691 18:27:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:29:19.691 18:27:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:19.691 18:27:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:19.691 18:27:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:29:19.691 18:27:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:19.692 18:27:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:19.692 18:27:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:29:19.692 "name": "Existed_Raid", 00:29:19.692 "uuid": "7e45284e-d9ef-4242-a341-da2e9cd84075", 00:29:19.692 "strip_size_kb": 64, 00:29:19.692 "state": "offline", 00:29:19.692 "raid_level": "raid0", 00:29:19.692 "superblock": false, 00:29:19.692 "num_base_bdevs": 4, 00:29:19.692 "num_base_bdevs_discovered": 3, 00:29:19.692 "num_base_bdevs_operational": 3, 00:29:19.692 "base_bdevs_list": [ 00:29:19.692 { 00:29:19.692 "name": null, 00:29:19.692 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:19.692 "is_configured": false, 00:29:19.692 "data_offset": 0, 00:29:19.692 "data_size": 65536 00:29:19.692 }, 00:29:19.692 { 00:29:19.692 "name": "BaseBdev2", 00:29:19.692 "uuid": "fc14fcd1-7b21-4309-8ca2-19204ed86cc6", 00:29:19.692 "is_configured": 
true, 00:29:19.692 "data_offset": 0, 00:29:19.692 "data_size": 65536 00:29:19.692 }, 00:29:19.692 { 00:29:19.692 "name": "BaseBdev3", 00:29:19.692 "uuid": "bfff6453-9dde-4dfd-80e3-aa16b4e9aff0", 00:29:19.692 "is_configured": true, 00:29:19.692 "data_offset": 0, 00:29:19.692 "data_size": 65536 00:29:19.692 }, 00:29:19.692 { 00:29:19.692 "name": "BaseBdev4", 00:29:19.692 "uuid": "152e09dc-d453-4e47-baf9-23313cce79c1", 00:29:19.692 "is_configured": true, 00:29:19.692 "data_offset": 0, 00:29:19.692 "data_size": 65536 00:29:19.692 } 00:29:19.692 ] 00:29:19.692 }' 00:29:19.692 18:27:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:29:19.692 18:27:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:19.950 18:27:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:29:19.950 18:27:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:29:19.950 18:27:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:19.950 18:27:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:29:19.950 18:27:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:19.950 18:27:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:20.209 18:27:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:20.209 18:27:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:29:20.209 18:27:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:29:20.209 18:27:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:29:20.209 18:27:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:29:20.209 18:27:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:20.209 [2024-12-06 18:27:50.933877] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:29:20.209 18:27:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:20.209 18:27:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:29:20.209 18:27:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:29:20.209 18:27:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:29:20.209 18:27:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:20.209 18:27:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:20.209 18:27:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:20.209 18:27:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:20.209 18:27:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:29:20.209 18:27:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:29:20.209 18:27:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:29:20.209 18:27:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:20.209 18:27:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:20.209 [2024-12-06 18:27:51.086741] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:29:20.468 18:27:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:20.468 18:27:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:29:20.468 18:27:51 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:29:20.468 18:27:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:20.468 18:27:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:20.468 18:27:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:20.468 18:27:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:29:20.468 18:27:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:20.468 18:27:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:29:20.468 18:27:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:29:20.468 18:27:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:29:20.468 18:27:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:20.468 18:27:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:20.468 [2024-12-06 18:27:51.239707] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:29:20.468 [2024-12-06 18:27:51.239758] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:29:20.468 18:27:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:20.469 18:27:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:29:20.469 18:27:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:29:20.469 18:27:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:20.469 18:27:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:29:20.469 18:27:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:20.469 18:27:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:29:20.469 18:27:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:20.469 18:27:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:29:20.469 18:27:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:29:20.469 18:27:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:29:20.469 18:27:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:29:20.469 18:27:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:29:20.469 18:27:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:29:20.469 18:27:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:20.469 18:27:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:20.729 BaseBdev2 00:29:20.729 18:27:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:20.729 18:27:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:29:20.729 18:27:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:29:20.729 18:27:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:29:20.729 18:27:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:29:20.729 18:27:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:29:20.729 18:27:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # 
bdev_timeout=2000 00:29:20.729 18:27:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:29:20.729 18:27:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:20.729 18:27:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:20.729 18:27:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:20.729 18:27:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:29:20.729 18:27:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:20.729 18:27:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:20.729 [ 00:29:20.729 { 00:29:20.729 "name": "BaseBdev2", 00:29:20.729 "aliases": [ 00:29:20.729 "6290338c-176b-49b1-8b2f-4d5c150dcc0f" 00:29:20.729 ], 00:29:20.729 "product_name": "Malloc disk", 00:29:20.729 "block_size": 512, 00:29:20.729 "num_blocks": 65536, 00:29:20.729 "uuid": "6290338c-176b-49b1-8b2f-4d5c150dcc0f", 00:29:20.729 "assigned_rate_limits": { 00:29:20.729 "rw_ios_per_sec": 0, 00:29:20.729 "rw_mbytes_per_sec": 0, 00:29:20.729 "r_mbytes_per_sec": 0, 00:29:20.729 "w_mbytes_per_sec": 0 00:29:20.729 }, 00:29:20.729 "claimed": false, 00:29:20.729 "zoned": false, 00:29:20.729 "supported_io_types": { 00:29:20.729 "read": true, 00:29:20.729 "write": true, 00:29:20.729 "unmap": true, 00:29:20.729 "flush": true, 00:29:20.729 "reset": true, 00:29:20.729 "nvme_admin": false, 00:29:20.729 "nvme_io": false, 00:29:20.729 "nvme_io_md": false, 00:29:20.729 "write_zeroes": true, 00:29:20.729 "zcopy": true, 00:29:20.729 "get_zone_info": false, 00:29:20.729 "zone_management": false, 00:29:20.729 "zone_append": false, 00:29:20.729 "compare": false, 00:29:20.729 "compare_and_write": false, 00:29:20.729 "abort": true, 00:29:20.729 "seek_hole": false, 00:29:20.729 
"seek_data": false, 00:29:20.729 "copy": true, 00:29:20.729 "nvme_iov_md": false 00:29:20.729 }, 00:29:20.729 "memory_domains": [ 00:29:20.729 { 00:29:20.729 "dma_device_id": "system", 00:29:20.729 "dma_device_type": 1 00:29:20.729 }, 00:29:20.729 { 00:29:20.729 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:29:20.729 "dma_device_type": 2 00:29:20.729 } 00:29:20.729 ], 00:29:20.729 "driver_specific": {} 00:29:20.729 } 00:29:20.729 ] 00:29:20.729 18:27:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:20.729 18:27:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:29:20.729 18:27:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:29:20.729 18:27:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:29:20.729 18:27:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:29:20.729 18:27:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:20.729 18:27:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:20.729 BaseBdev3 00:29:20.729 18:27:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:20.729 18:27:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:29:20.729 18:27:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:29:20.729 18:27:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:29:20.729 18:27:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:29:20.729 18:27:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:29:20.729 18:27:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 
00:29:20.729 18:27:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:29:20.729 18:27:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:20.729 18:27:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:20.729 18:27:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:20.729 18:27:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:29:20.729 18:27:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:20.729 18:27:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:20.729 [ 00:29:20.729 { 00:29:20.729 "name": "BaseBdev3", 00:29:20.729 "aliases": [ 00:29:20.729 "f20237d6-7e5d-4987-aa57-4886f83a6849" 00:29:20.729 ], 00:29:20.729 "product_name": "Malloc disk", 00:29:20.729 "block_size": 512, 00:29:20.729 "num_blocks": 65536, 00:29:20.729 "uuid": "f20237d6-7e5d-4987-aa57-4886f83a6849", 00:29:20.729 "assigned_rate_limits": { 00:29:20.729 "rw_ios_per_sec": 0, 00:29:20.729 "rw_mbytes_per_sec": 0, 00:29:20.729 "r_mbytes_per_sec": 0, 00:29:20.729 "w_mbytes_per_sec": 0 00:29:20.729 }, 00:29:20.729 "claimed": false, 00:29:20.729 "zoned": false, 00:29:20.729 "supported_io_types": { 00:29:20.729 "read": true, 00:29:20.729 "write": true, 00:29:20.729 "unmap": true, 00:29:20.729 "flush": true, 00:29:20.729 "reset": true, 00:29:20.729 "nvme_admin": false, 00:29:20.729 "nvme_io": false, 00:29:20.729 "nvme_io_md": false, 00:29:20.729 "write_zeroes": true, 00:29:20.729 "zcopy": true, 00:29:20.729 "get_zone_info": false, 00:29:20.729 "zone_management": false, 00:29:20.729 "zone_append": false, 00:29:20.729 "compare": false, 00:29:20.729 "compare_and_write": false, 00:29:20.729 "abort": true, 00:29:20.729 "seek_hole": false, 00:29:20.729 "seek_data": false, 
00:29:20.729 "copy": true, 00:29:20.729 "nvme_iov_md": false 00:29:20.729 }, 00:29:20.729 "memory_domains": [ 00:29:20.729 { 00:29:20.729 "dma_device_id": "system", 00:29:20.729 "dma_device_type": 1 00:29:20.729 }, 00:29:20.729 { 00:29:20.729 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:29:20.729 "dma_device_type": 2 00:29:20.729 } 00:29:20.729 ], 00:29:20.729 "driver_specific": {} 00:29:20.729 } 00:29:20.729 ] 00:29:20.729 18:27:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:20.729 18:27:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:29:20.729 18:27:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:29:20.729 18:27:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:29:20.729 18:27:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:29:20.729 18:27:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:20.729 18:27:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:20.729 BaseBdev4 00:29:20.729 18:27:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:20.729 18:27:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:29:20.729 18:27:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:29:20.730 18:27:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:29:20.730 18:27:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:29:20.730 18:27:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:29:20.730 18:27:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:29:20.730 
18:27:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:29:20.730 18:27:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:20.730 18:27:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:20.730 18:27:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:20.730 18:27:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:29:20.730 18:27:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:20.730 18:27:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:20.730 [ 00:29:20.730 { 00:29:20.730 "name": "BaseBdev4", 00:29:20.730 "aliases": [ 00:29:20.730 "14300a54-327d-410a-8a38-e7136ec6b315" 00:29:20.730 ], 00:29:20.730 "product_name": "Malloc disk", 00:29:20.730 "block_size": 512, 00:29:20.730 "num_blocks": 65536, 00:29:20.730 "uuid": "14300a54-327d-410a-8a38-e7136ec6b315", 00:29:20.730 "assigned_rate_limits": { 00:29:20.730 "rw_ios_per_sec": 0, 00:29:20.730 "rw_mbytes_per_sec": 0, 00:29:20.730 "r_mbytes_per_sec": 0, 00:29:20.730 "w_mbytes_per_sec": 0 00:29:20.730 }, 00:29:20.730 "claimed": false, 00:29:20.730 "zoned": false, 00:29:20.730 "supported_io_types": { 00:29:20.730 "read": true, 00:29:20.730 "write": true, 00:29:20.730 "unmap": true, 00:29:20.730 "flush": true, 00:29:20.730 "reset": true, 00:29:20.730 "nvme_admin": false, 00:29:20.730 "nvme_io": false, 00:29:20.730 "nvme_io_md": false, 00:29:20.730 "write_zeroes": true, 00:29:20.730 "zcopy": true, 00:29:20.730 "get_zone_info": false, 00:29:20.730 "zone_management": false, 00:29:20.730 "zone_append": false, 00:29:20.730 "compare": false, 00:29:20.730 "compare_and_write": false, 00:29:20.730 "abort": true, 00:29:20.730 "seek_hole": false, 00:29:20.730 "seek_data": false, 00:29:20.730 
"copy": true, 00:29:20.730 "nvme_iov_md": false 00:29:20.730 }, 00:29:20.730 "memory_domains": [ 00:29:20.730 { 00:29:20.730 "dma_device_id": "system", 00:29:20.730 "dma_device_type": 1 00:29:20.730 }, 00:29:20.730 { 00:29:20.730 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:29:20.730 "dma_device_type": 2 00:29:20.730 } 00:29:20.730 ], 00:29:20.730 "driver_specific": {} 00:29:20.730 } 00:29:20.730 ] 00:29:20.730 18:27:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:20.730 18:27:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:29:20.730 18:27:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:29:20.730 18:27:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:29:20.730 18:27:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:29:20.730 18:27:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:20.730 18:27:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:20.730 [2024-12-06 18:27:51.661811] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:29:20.730 [2024-12-06 18:27:51.661984] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:29:20.730 [2024-12-06 18:27:51.662106] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:29:20.730 [2024-12-06 18:27:51.664331] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:29:20.730 [2024-12-06 18:27:51.664536] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:29:20.730 18:27:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:20.730 18:27:51 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:29:20.730 18:27:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:29:20.730 18:27:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:29:20.730 18:27:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:29:20.730 18:27:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:29:20.730 18:27:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:29:20.730 18:27:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:29:20.730 18:27:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:29:20.730 18:27:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:29:20.730 18:27:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:29:20.730 18:27:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:29:20.730 18:27:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:20.730 18:27:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:20.730 18:27:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:20.989 18:27:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:20.989 18:27:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:29:20.989 "name": "Existed_Raid", 00:29:20.989 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:20.989 "strip_size_kb": 64, 00:29:20.989 "state": "configuring", 00:29:20.989 
"raid_level": "raid0", 00:29:20.989 "superblock": false, 00:29:20.989 "num_base_bdevs": 4, 00:29:20.989 "num_base_bdevs_discovered": 3, 00:29:20.989 "num_base_bdevs_operational": 4, 00:29:20.989 "base_bdevs_list": [ 00:29:20.989 { 00:29:20.989 "name": "BaseBdev1", 00:29:20.989 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:20.989 "is_configured": false, 00:29:20.989 "data_offset": 0, 00:29:20.989 "data_size": 0 00:29:20.989 }, 00:29:20.989 { 00:29:20.989 "name": "BaseBdev2", 00:29:20.989 "uuid": "6290338c-176b-49b1-8b2f-4d5c150dcc0f", 00:29:20.989 "is_configured": true, 00:29:20.990 "data_offset": 0, 00:29:20.990 "data_size": 65536 00:29:20.990 }, 00:29:20.990 { 00:29:20.990 "name": "BaseBdev3", 00:29:20.990 "uuid": "f20237d6-7e5d-4987-aa57-4886f83a6849", 00:29:20.990 "is_configured": true, 00:29:20.990 "data_offset": 0, 00:29:20.990 "data_size": 65536 00:29:20.990 }, 00:29:20.990 { 00:29:20.990 "name": "BaseBdev4", 00:29:20.990 "uuid": "14300a54-327d-410a-8a38-e7136ec6b315", 00:29:20.990 "is_configured": true, 00:29:20.990 "data_offset": 0, 00:29:20.990 "data_size": 65536 00:29:20.990 } 00:29:20.990 ] 00:29:20.990 }' 00:29:20.990 18:27:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:29:20.990 18:27:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:21.264 18:27:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:29:21.264 18:27:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:21.264 18:27:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:21.264 [2024-12-06 18:27:52.073921] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:29:21.264 18:27:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:21.264 18:27:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # 
verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:29:21.264 18:27:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:29:21.264 18:27:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:29:21.264 18:27:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:29:21.264 18:27:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:29:21.264 18:27:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:29:21.264 18:27:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:29:21.264 18:27:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:29:21.264 18:27:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:29:21.264 18:27:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:29:21.264 18:27:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:21.264 18:27:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:21.264 18:27:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:21.264 18:27:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:29:21.264 18:27:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:21.264 18:27:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:29:21.264 "name": "Existed_Raid", 00:29:21.264 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:21.264 "strip_size_kb": 64, 00:29:21.264 "state": "configuring", 00:29:21.264 "raid_level": "raid0", 00:29:21.264 "superblock": false, 00:29:21.264 
"num_base_bdevs": 4, 00:29:21.264 "num_base_bdevs_discovered": 2, 00:29:21.264 "num_base_bdevs_operational": 4, 00:29:21.264 "base_bdevs_list": [ 00:29:21.264 { 00:29:21.264 "name": "BaseBdev1", 00:29:21.264 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:21.264 "is_configured": false, 00:29:21.264 "data_offset": 0, 00:29:21.264 "data_size": 0 00:29:21.264 }, 00:29:21.264 { 00:29:21.264 "name": null, 00:29:21.264 "uuid": "6290338c-176b-49b1-8b2f-4d5c150dcc0f", 00:29:21.264 "is_configured": false, 00:29:21.264 "data_offset": 0, 00:29:21.264 "data_size": 65536 00:29:21.264 }, 00:29:21.264 { 00:29:21.264 "name": "BaseBdev3", 00:29:21.264 "uuid": "f20237d6-7e5d-4987-aa57-4886f83a6849", 00:29:21.264 "is_configured": true, 00:29:21.264 "data_offset": 0, 00:29:21.264 "data_size": 65536 00:29:21.264 }, 00:29:21.264 { 00:29:21.264 "name": "BaseBdev4", 00:29:21.264 "uuid": "14300a54-327d-410a-8a38-e7136ec6b315", 00:29:21.264 "is_configured": true, 00:29:21.264 "data_offset": 0, 00:29:21.264 "data_size": 65536 00:29:21.264 } 00:29:21.264 ] 00:29:21.264 }' 00:29:21.264 18:27:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:29:21.264 18:27:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:21.833 18:27:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:29:21.833 18:27:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:21.833 18:27:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:21.833 18:27:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:21.833 18:27:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:21.833 18:27:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:29:21.833 18:27:52 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:29:21.833 18:27:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:21.833 18:27:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:21.833 [2024-12-06 18:27:52.604324] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:29:21.833 BaseBdev1 00:29:21.833 18:27:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:21.833 18:27:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:29:21.833 18:27:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:29:21.833 18:27:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:29:21.833 18:27:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:29:21.833 18:27:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:29:21.833 18:27:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:29:21.833 18:27:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:29:21.833 18:27:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:21.833 18:27:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:21.833 18:27:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:21.833 18:27:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:29:21.833 18:27:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:21.833 18:27:52 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:29:21.833 [ 00:29:21.833 { 00:29:21.833 "name": "BaseBdev1", 00:29:21.833 "aliases": [ 00:29:21.833 "782aef6b-f959-42d6-999f-0cbf4b9f9e67" 00:29:21.833 ], 00:29:21.833 "product_name": "Malloc disk", 00:29:21.833 "block_size": 512, 00:29:21.833 "num_blocks": 65536, 00:29:21.833 "uuid": "782aef6b-f959-42d6-999f-0cbf4b9f9e67", 00:29:21.833 "assigned_rate_limits": { 00:29:21.833 "rw_ios_per_sec": 0, 00:29:21.833 "rw_mbytes_per_sec": 0, 00:29:21.833 "r_mbytes_per_sec": 0, 00:29:21.833 "w_mbytes_per_sec": 0 00:29:21.833 }, 00:29:21.833 "claimed": true, 00:29:21.833 "claim_type": "exclusive_write", 00:29:21.833 "zoned": false, 00:29:21.833 "supported_io_types": { 00:29:21.833 "read": true, 00:29:21.833 "write": true, 00:29:21.833 "unmap": true, 00:29:21.833 "flush": true, 00:29:21.833 "reset": true, 00:29:21.833 "nvme_admin": false, 00:29:21.833 "nvme_io": false, 00:29:21.833 "nvme_io_md": false, 00:29:21.833 "write_zeroes": true, 00:29:21.833 "zcopy": true, 00:29:21.833 "get_zone_info": false, 00:29:21.833 "zone_management": false, 00:29:21.833 "zone_append": false, 00:29:21.833 "compare": false, 00:29:21.833 "compare_and_write": false, 00:29:21.833 "abort": true, 00:29:21.833 "seek_hole": false, 00:29:21.833 "seek_data": false, 00:29:21.833 "copy": true, 00:29:21.833 "nvme_iov_md": false 00:29:21.833 }, 00:29:21.833 "memory_domains": [ 00:29:21.833 { 00:29:21.833 "dma_device_id": "system", 00:29:21.833 "dma_device_type": 1 00:29:21.833 }, 00:29:21.833 { 00:29:21.833 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:29:21.833 "dma_device_type": 2 00:29:21.833 } 00:29:21.833 ], 00:29:21.833 "driver_specific": {} 00:29:21.833 } 00:29:21.833 ] 00:29:21.833 18:27:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:21.833 18:27:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:29:21.833 18:27:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- 
# verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:29:21.833 18:27:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:29:21.833 18:27:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:29:21.833 18:27:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:29:21.833 18:27:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:29:21.833 18:27:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:29:21.833 18:27:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:29:21.833 18:27:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:29:21.833 18:27:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:29:21.833 18:27:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:29:21.833 18:27:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:21.833 18:27:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:29:21.833 18:27:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:21.833 18:27:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:21.833 18:27:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:21.833 18:27:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:29:21.833 "name": "Existed_Raid", 00:29:21.833 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:21.833 "strip_size_kb": 64, 00:29:21.833 "state": "configuring", 00:29:21.833 "raid_level": "raid0", 00:29:21.833 "superblock": false, 
00:29:21.833 "num_base_bdevs": 4, 00:29:21.833 "num_base_bdevs_discovered": 3, 00:29:21.833 "num_base_bdevs_operational": 4, 00:29:21.833 "base_bdevs_list": [ 00:29:21.833 { 00:29:21.833 "name": "BaseBdev1", 00:29:21.833 "uuid": "782aef6b-f959-42d6-999f-0cbf4b9f9e67", 00:29:21.833 "is_configured": true, 00:29:21.833 "data_offset": 0, 00:29:21.833 "data_size": 65536 00:29:21.833 }, 00:29:21.833 { 00:29:21.833 "name": null, 00:29:21.833 "uuid": "6290338c-176b-49b1-8b2f-4d5c150dcc0f", 00:29:21.833 "is_configured": false, 00:29:21.833 "data_offset": 0, 00:29:21.833 "data_size": 65536 00:29:21.833 }, 00:29:21.833 { 00:29:21.833 "name": "BaseBdev3", 00:29:21.833 "uuid": "f20237d6-7e5d-4987-aa57-4886f83a6849", 00:29:21.833 "is_configured": true, 00:29:21.833 "data_offset": 0, 00:29:21.833 "data_size": 65536 00:29:21.833 }, 00:29:21.833 { 00:29:21.833 "name": "BaseBdev4", 00:29:21.833 "uuid": "14300a54-327d-410a-8a38-e7136ec6b315", 00:29:21.833 "is_configured": true, 00:29:21.833 "data_offset": 0, 00:29:21.833 "data_size": 65536 00:29:21.833 } 00:29:21.833 ] 00:29:21.833 }' 00:29:21.833 18:27:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:29:21.833 18:27:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:22.401 18:27:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:22.401 18:27:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:29:22.401 18:27:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:22.401 18:27:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:22.401 18:27:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:22.401 18:27:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:29:22.401 18:27:53 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:29:22.401 18:27:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:22.401 18:27:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:22.401 [2024-12-06 18:27:53.127688] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:29:22.401 18:27:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:22.401 18:27:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:29:22.401 18:27:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:29:22.401 18:27:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:29:22.401 18:27:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:29:22.401 18:27:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:29:22.401 18:27:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:29:22.401 18:27:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:29:22.401 18:27:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:29:22.401 18:27:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:29:22.401 18:27:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:29:22.401 18:27:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:22.401 18:27:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:29:22.401 18:27:53 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:22.401 18:27:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:22.401 18:27:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:22.401 18:27:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:29:22.401 "name": "Existed_Raid", 00:29:22.401 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:22.401 "strip_size_kb": 64, 00:29:22.401 "state": "configuring", 00:29:22.401 "raid_level": "raid0", 00:29:22.401 "superblock": false, 00:29:22.401 "num_base_bdevs": 4, 00:29:22.401 "num_base_bdevs_discovered": 2, 00:29:22.401 "num_base_bdevs_operational": 4, 00:29:22.401 "base_bdevs_list": [ 00:29:22.401 { 00:29:22.401 "name": "BaseBdev1", 00:29:22.401 "uuid": "782aef6b-f959-42d6-999f-0cbf4b9f9e67", 00:29:22.401 "is_configured": true, 00:29:22.401 "data_offset": 0, 00:29:22.401 "data_size": 65536 00:29:22.401 }, 00:29:22.401 { 00:29:22.401 "name": null, 00:29:22.401 "uuid": "6290338c-176b-49b1-8b2f-4d5c150dcc0f", 00:29:22.401 "is_configured": false, 00:29:22.401 "data_offset": 0, 00:29:22.401 "data_size": 65536 00:29:22.401 }, 00:29:22.401 { 00:29:22.401 "name": null, 00:29:22.401 "uuid": "f20237d6-7e5d-4987-aa57-4886f83a6849", 00:29:22.401 "is_configured": false, 00:29:22.401 "data_offset": 0, 00:29:22.401 "data_size": 65536 00:29:22.401 }, 00:29:22.401 { 00:29:22.401 "name": "BaseBdev4", 00:29:22.401 "uuid": "14300a54-327d-410a-8a38-e7136ec6b315", 00:29:22.401 "is_configured": true, 00:29:22.401 "data_offset": 0, 00:29:22.401 "data_size": 65536 00:29:22.401 } 00:29:22.401 ] 00:29:22.401 }' 00:29:22.401 18:27:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:29:22.401 18:27:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:22.660 18:27:53 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:22.660 18:27:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:22.660 18:27:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:22.660 18:27:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:29:22.660 18:27:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:22.660 18:27:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:29:22.660 18:27:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:29:22.660 18:27:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:22.660 18:27:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:22.660 [2024-12-06 18:27:53.571097] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:29:22.660 18:27:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:22.660 18:27:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:29:22.660 18:27:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:29:22.660 18:27:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:29:22.660 18:27:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:29:22.660 18:27:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:29:22.660 18:27:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:29:22.660 18:27:53 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:29:22.660 18:27:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:29:22.660 18:27:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:29:22.660 18:27:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:29:22.660 18:27:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:22.660 18:27:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:29:22.660 18:27:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:22.660 18:27:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:22.920 18:27:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:22.920 18:27:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:29:22.920 "name": "Existed_Raid", 00:29:22.920 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:22.920 "strip_size_kb": 64, 00:29:22.920 "state": "configuring", 00:29:22.920 "raid_level": "raid0", 00:29:22.920 "superblock": false, 00:29:22.920 "num_base_bdevs": 4, 00:29:22.920 "num_base_bdevs_discovered": 3, 00:29:22.920 "num_base_bdevs_operational": 4, 00:29:22.920 "base_bdevs_list": [ 00:29:22.920 { 00:29:22.920 "name": "BaseBdev1", 00:29:22.920 "uuid": "782aef6b-f959-42d6-999f-0cbf4b9f9e67", 00:29:22.920 "is_configured": true, 00:29:22.920 "data_offset": 0, 00:29:22.920 "data_size": 65536 00:29:22.920 }, 00:29:22.920 { 00:29:22.920 "name": null, 00:29:22.920 "uuid": "6290338c-176b-49b1-8b2f-4d5c150dcc0f", 00:29:22.920 "is_configured": false, 00:29:22.920 "data_offset": 0, 00:29:22.920 "data_size": 65536 00:29:22.920 }, 00:29:22.920 { 00:29:22.920 "name": "BaseBdev3", 00:29:22.920 "uuid": "f20237d6-7e5d-4987-aa57-4886f83a6849", 
00:29:22.920 "is_configured": true, 00:29:22.920 "data_offset": 0, 00:29:22.920 "data_size": 65536 00:29:22.920 }, 00:29:22.920 { 00:29:22.920 "name": "BaseBdev4", 00:29:22.920 "uuid": "14300a54-327d-410a-8a38-e7136ec6b315", 00:29:22.920 "is_configured": true, 00:29:22.920 "data_offset": 0, 00:29:22.920 "data_size": 65536 00:29:22.920 } 00:29:22.920 ] 00:29:22.920 }' 00:29:22.920 18:27:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:29:22.920 18:27:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:23.180 18:27:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:29:23.180 18:27:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:23.180 18:27:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:23.180 18:27:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:23.180 18:27:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:23.180 18:27:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:29:23.180 18:27:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:29:23.180 18:27:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:23.180 18:27:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:23.180 [2024-12-06 18:27:54.054443] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:29:23.513 18:27:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:23.513 18:27:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:29:23.513 18:27:54 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:29:23.513 18:27:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:29:23.513 18:27:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:29:23.513 18:27:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:29:23.513 18:27:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:29:23.513 18:27:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:29:23.513 18:27:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:29:23.513 18:27:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:29:23.513 18:27:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:29:23.513 18:27:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:23.513 18:27:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:23.513 18:27:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:23.513 18:27:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:29:23.513 18:27:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:23.513 18:27:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:29:23.513 "name": "Existed_Raid", 00:29:23.513 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:23.513 "strip_size_kb": 64, 00:29:23.513 "state": "configuring", 00:29:23.513 "raid_level": "raid0", 00:29:23.513 "superblock": false, 00:29:23.513 "num_base_bdevs": 4, 00:29:23.513 "num_base_bdevs_discovered": 2, 00:29:23.513 
"num_base_bdevs_operational": 4, 00:29:23.513 "base_bdevs_list": [ 00:29:23.513 { 00:29:23.513 "name": null, 00:29:23.513 "uuid": "782aef6b-f959-42d6-999f-0cbf4b9f9e67", 00:29:23.513 "is_configured": false, 00:29:23.513 "data_offset": 0, 00:29:23.513 "data_size": 65536 00:29:23.513 }, 00:29:23.513 { 00:29:23.513 "name": null, 00:29:23.513 "uuid": "6290338c-176b-49b1-8b2f-4d5c150dcc0f", 00:29:23.513 "is_configured": false, 00:29:23.513 "data_offset": 0, 00:29:23.513 "data_size": 65536 00:29:23.513 }, 00:29:23.513 { 00:29:23.513 "name": "BaseBdev3", 00:29:23.513 "uuid": "f20237d6-7e5d-4987-aa57-4886f83a6849", 00:29:23.513 "is_configured": true, 00:29:23.513 "data_offset": 0, 00:29:23.513 "data_size": 65536 00:29:23.513 }, 00:29:23.513 { 00:29:23.513 "name": "BaseBdev4", 00:29:23.513 "uuid": "14300a54-327d-410a-8a38-e7136ec6b315", 00:29:23.513 "is_configured": true, 00:29:23.513 "data_offset": 0, 00:29:23.513 "data_size": 65536 00:29:23.513 } 00:29:23.513 ] 00:29:23.513 }' 00:29:23.513 18:27:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:29:23.513 18:27:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:23.804 18:27:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:23.804 18:27:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:29:23.804 18:27:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:23.804 18:27:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:23.804 18:27:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:23.804 18:27:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:29:23.804 18:27:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev 
Existed_Raid BaseBdev2 00:29:23.804 18:27:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:23.804 18:27:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:23.804 [2024-12-06 18:27:54.606798] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:29:23.804 18:27:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:23.804 18:27:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:29:23.804 18:27:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:29:23.804 18:27:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:29:23.804 18:27:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:29:23.804 18:27:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:29:23.804 18:27:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:29:23.804 18:27:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:29:23.804 18:27:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:29:23.804 18:27:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:29:23.804 18:27:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:29:23.804 18:27:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:23.804 18:27:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:23.804 18:27:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:23.804 18:27:54 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:29:23.804 18:27:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:23.804 18:27:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:29:23.804 "name": "Existed_Raid", 00:29:23.804 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:23.804 "strip_size_kb": 64, 00:29:23.804 "state": "configuring", 00:29:23.804 "raid_level": "raid0", 00:29:23.804 "superblock": false, 00:29:23.804 "num_base_bdevs": 4, 00:29:23.804 "num_base_bdevs_discovered": 3, 00:29:23.804 "num_base_bdevs_operational": 4, 00:29:23.804 "base_bdevs_list": [ 00:29:23.804 { 00:29:23.804 "name": null, 00:29:23.804 "uuid": "782aef6b-f959-42d6-999f-0cbf4b9f9e67", 00:29:23.804 "is_configured": false, 00:29:23.804 "data_offset": 0, 00:29:23.804 "data_size": 65536 00:29:23.804 }, 00:29:23.804 { 00:29:23.804 "name": "BaseBdev2", 00:29:23.804 "uuid": "6290338c-176b-49b1-8b2f-4d5c150dcc0f", 00:29:23.804 "is_configured": true, 00:29:23.804 "data_offset": 0, 00:29:23.804 "data_size": 65536 00:29:23.804 }, 00:29:23.804 { 00:29:23.804 "name": "BaseBdev3", 00:29:23.804 "uuid": "f20237d6-7e5d-4987-aa57-4886f83a6849", 00:29:23.804 "is_configured": true, 00:29:23.804 "data_offset": 0, 00:29:23.804 "data_size": 65536 00:29:23.804 }, 00:29:23.804 { 00:29:23.804 "name": "BaseBdev4", 00:29:23.804 "uuid": "14300a54-327d-410a-8a38-e7136ec6b315", 00:29:23.804 "is_configured": true, 00:29:23.804 "data_offset": 0, 00:29:23.804 "data_size": 65536 00:29:23.804 } 00:29:23.804 ] 00:29:23.804 }' 00:29:23.804 18:27:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:29:23.804 18:27:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:24.374 18:27:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:29:24.374 
18:27:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:24.374 18:27:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:24.374 18:27:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:24.374 18:27:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:24.374 18:27:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:29:24.374 18:27:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:24.374 18:27:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:24.374 18:27:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:24.374 18:27:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:29:24.374 18:27:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:24.374 18:27:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 782aef6b-f959-42d6-999f-0cbf4b9f9e67 00:29:24.374 18:27:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:24.374 18:27:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:24.374 [2024-12-06 18:27:55.144021] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:29:24.374 [2024-12-06 18:27:55.144071] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:29:24.374 [2024-12-06 18:27:55.144081] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:29:24.374 [2024-12-06 18:27:55.144397] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 
00:29:24.374 [2024-12-06 18:27:55.144540] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:29:24.374 [2024-12-06 18:27:55.144558] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:29:24.374 [2024-12-06 18:27:55.144812] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:29:24.374 NewBaseBdev 00:29:24.374 18:27:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:24.374 18:27:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:29:24.374 18:27:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:29:24.374 18:27:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:29:24.374 18:27:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:29:24.374 18:27:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:29:24.374 18:27:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:29:24.374 18:27:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:29:24.374 18:27:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:24.374 18:27:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:24.374 18:27:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:24.374 18:27:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:29:24.374 18:27:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:24.374 18:27:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set 
+x 00:29:24.374 [ 00:29:24.374 { 00:29:24.374 "name": "NewBaseBdev", 00:29:24.374 "aliases": [ 00:29:24.374 "782aef6b-f959-42d6-999f-0cbf4b9f9e67" 00:29:24.374 ], 00:29:24.374 "product_name": "Malloc disk", 00:29:24.374 "block_size": 512, 00:29:24.374 "num_blocks": 65536, 00:29:24.374 "uuid": "782aef6b-f959-42d6-999f-0cbf4b9f9e67", 00:29:24.374 "assigned_rate_limits": { 00:29:24.374 "rw_ios_per_sec": 0, 00:29:24.374 "rw_mbytes_per_sec": 0, 00:29:24.374 "r_mbytes_per_sec": 0, 00:29:24.374 "w_mbytes_per_sec": 0 00:29:24.374 }, 00:29:24.374 "claimed": true, 00:29:24.374 "claim_type": "exclusive_write", 00:29:24.374 "zoned": false, 00:29:24.374 "supported_io_types": { 00:29:24.374 "read": true, 00:29:24.374 "write": true, 00:29:24.374 "unmap": true, 00:29:24.374 "flush": true, 00:29:24.374 "reset": true, 00:29:24.374 "nvme_admin": false, 00:29:24.374 "nvme_io": false, 00:29:24.374 "nvme_io_md": false, 00:29:24.374 "write_zeroes": true, 00:29:24.374 "zcopy": true, 00:29:24.374 "get_zone_info": false, 00:29:24.374 "zone_management": false, 00:29:24.374 "zone_append": false, 00:29:24.374 "compare": false, 00:29:24.374 "compare_and_write": false, 00:29:24.374 "abort": true, 00:29:24.374 "seek_hole": false, 00:29:24.374 "seek_data": false, 00:29:24.374 "copy": true, 00:29:24.374 "nvme_iov_md": false 00:29:24.374 }, 00:29:24.374 "memory_domains": [ 00:29:24.374 { 00:29:24.374 "dma_device_id": "system", 00:29:24.374 "dma_device_type": 1 00:29:24.374 }, 00:29:24.374 { 00:29:24.374 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:29:24.374 "dma_device_type": 2 00:29:24.374 } 00:29:24.374 ], 00:29:24.374 "driver_specific": {} 00:29:24.374 } 00:29:24.374 ] 00:29:24.374 18:27:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:24.374 18:27:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:29:24.374 18:27:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state 
Existed_Raid online raid0 64 4 00:29:24.374 18:27:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:29:24.374 18:27:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:29:24.374 18:27:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:29:24.374 18:27:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:29:24.374 18:27:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:29:24.374 18:27:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:29:24.374 18:27:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:29:24.374 18:27:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:29:24.374 18:27:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:29:24.374 18:27:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:29:24.374 18:27:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:24.374 18:27:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:24.374 18:27:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:24.374 18:27:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:24.374 18:27:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:29:24.374 "name": "Existed_Raid", 00:29:24.374 "uuid": "c6214d55-b981-4537-99ca-82af7824d90c", 00:29:24.374 "strip_size_kb": 64, 00:29:24.374 "state": "online", 00:29:24.374 "raid_level": "raid0", 00:29:24.374 "superblock": false, 00:29:24.374 "num_base_bdevs": 4, 00:29:24.374 
"num_base_bdevs_discovered": 4, 00:29:24.374 "num_base_bdevs_operational": 4, 00:29:24.374 "base_bdevs_list": [ 00:29:24.374 { 00:29:24.374 "name": "NewBaseBdev", 00:29:24.374 "uuid": "782aef6b-f959-42d6-999f-0cbf4b9f9e67", 00:29:24.374 "is_configured": true, 00:29:24.374 "data_offset": 0, 00:29:24.374 "data_size": 65536 00:29:24.374 }, 00:29:24.374 { 00:29:24.374 "name": "BaseBdev2", 00:29:24.374 "uuid": "6290338c-176b-49b1-8b2f-4d5c150dcc0f", 00:29:24.374 "is_configured": true, 00:29:24.374 "data_offset": 0, 00:29:24.374 "data_size": 65536 00:29:24.374 }, 00:29:24.374 { 00:29:24.374 "name": "BaseBdev3", 00:29:24.374 "uuid": "f20237d6-7e5d-4987-aa57-4886f83a6849", 00:29:24.374 "is_configured": true, 00:29:24.374 "data_offset": 0, 00:29:24.374 "data_size": 65536 00:29:24.374 }, 00:29:24.374 { 00:29:24.374 "name": "BaseBdev4", 00:29:24.374 "uuid": "14300a54-327d-410a-8a38-e7136ec6b315", 00:29:24.374 "is_configured": true, 00:29:24.374 "data_offset": 0, 00:29:24.374 "data_size": 65536 00:29:24.374 } 00:29:24.374 ] 00:29:24.374 }' 00:29:24.374 18:27:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:29:24.374 18:27:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:24.941 18:27:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:29:24.941 18:27:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:29:24.941 18:27:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:29:24.941 18:27:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:29:24.941 18:27:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:29:24.941 18:27:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:29:24.941 18:27:55 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:29:24.941 18:27:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:24.941 18:27:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:24.941 18:27:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:29:24.941 [2024-12-06 18:27:55.604042] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:29:24.941 18:27:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:24.941 18:27:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:29:24.941 "name": "Existed_Raid", 00:29:24.941 "aliases": [ 00:29:24.941 "c6214d55-b981-4537-99ca-82af7824d90c" 00:29:24.941 ], 00:29:24.941 "product_name": "Raid Volume", 00:29:24.941 "block_size": 512, 00:29:24.941 "num_blocks": 262144, 00:29:24.941 "uuid": "c6214d55-b981-4537-99ca-82af7824d90c", 00:29:24.941 "assigned_rate_limits": { 00:29:24.941 "rw_ios_per_sec": 0, 00:29:24.941 "rw_mbytes_per_sec": 0, 00:29:24.942 "r_mbytes_per_sec": 0, 00:29:24.942 "w_mbytes_per_sec": 0 00:29:24.942 }, 00:29:24.942 "claimed": false, 00:29:24.942 "zoned": false, 00:29:24.942 "supported_io_types": { 00:29:24.942 "read": true, 00:29:24.942 "write": true, 00:29:24.942 "unmap": true, 00:29:24.942 "flush": true, 00:29:24.942 "reset": true, 00:29:24.942 "nvme_admin": false, 00:29:24.942 "nvme_io": false, 00:29:24.942 "nvme_io_md": false, 00:29:24.942 "write_zeroes": true, 00:29:24.942 "zcopy": false, 00:29:24.942 "get_zone_info": false, 00:29:24.942 "zone_management": false, 00:29:24.942 "zone_append": false, 00:29:24.942 "compare": false, 00:29:24.942 "compare_and_write": false, 00:29:24.942 "abort": false, 00:29:24.942 "seek_hole": false, 00:29:24.942 "seek_data": false, 00:29:24.942 "copy": false, 00:29:24.942 "nvme_iov_md": false 00:29:24.942 }, 00:29:24.942 "memory_domains": [ 
00:29:24.942 { 00:29:24.942 "dma_device_id": "system", 00:29:24.942 "dma_device_type": 1 00:29:24.942 }, 00:29:24.942 { 00:29:24.942 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:29:24.942 "dma_device_type": 2 00:29:24.942 }, 00:29:24.942 { 00:29:24.942 "dma_device_id": "system", 00:29:24.942 "dma_device_type": 1 00:29:24.942 }, 00:29:24.942 { 00:29:24.942 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:29:24.942 "dma_device_type": 2 00:29:24.942 }, 00:29:24.942 { 00:29:24.942 "dma_device_id": "system", 00:29:24.942 "dma_device_type": 1 00:29:24.942 }, 00:29:24.942 { 00:29:24.942 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:29:24.942 "dma_device_type": 2 00:29:24.942 }, 00:29:24.942 { 00:29:24.942 "dma_device_id": "system", 00:29:24.942 "dma_device_type": 1 00:29:24.942 }, 00:29:24.942 { 00:29:24.942 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:29:24.942 "dma_device_type": 2 00:29:24.942 } 00:29:24.942 ], 00:29:24.942 "driver_specific": { 00:29:24.942 "raid": { 00:29:24.942 "uuid": "c6214d55-b981-4537-99ca-82af7824d90c", 00:29:24.942 "strip_size_kb": 64, 00:29:24.942 "state": "online", 00:29:24.942 "raid_level": "raid0", 00:29:24.942 "superblock": false, 00:29:24.942 "num_base_bdevs": 4, 00:29:24.942 "num_base_bdevs_discovered": 4, 00:29:24.942 "num_base_bdevs_operational": 4, 00:29:24.942 "base_bdevs_list": [ 00:29:24.942 { 00:29:24.942 "name": "NewBaseBdev", 00:29:24.942 "uuid": "782aef6b-f959-42d6-999f-0cbf4b9f9e67", 00:29:24.942 "is_configured": true, 00:29:24.942 "data_offset": 0, 00:29:24.942 "data_size": 65536 00:29:24.942 }, 00:29:24.942 { 00:29:24.942 "name": "BaseBdev2", 00:29:24.942 "uuid": "6290338c-176b-49b1-8b2f-4d5c150dcc0f", 00:29:24.942 "is_configured": true, 00:29:24.942 "data_offset": 0, 00:29:24.942 "data_size": 65536 00:29:24.942 }, 00:29:24.942 { 00:29:24.942 "name": "BaseBdev3", 00:29:24.942 "uuid": "f20237d6-7e5d-4987-aa57-4886f83a6849", 00:29:24.942 "is_configured": true, 00:29:24.942 "data_offset": 0, 00:29:24.942 "data_size": 65536 
00:29:24.942 }, 00:29:24.942 { 00:29:24.942 "name": "BaseBdev4", 00:29:24.942 "uuid": "14300a54-327d-410a-8a38-e7136ec6b315", 00:29:24.942 "is_configured": true, 00:29:24.942 "data_offset": 0, 00:29:24.942 "data_size": 65536 00:29:24.942 } 00:29:24.942 ] 00:29:24.942 } 00:29:24.942 } 00:29:24.942 }' 00:29:24.942 18:27:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:29:24.942 18:27:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:29:24.942 BaseBdev2 00:29:24.942 BaseBdev3 00:29:24.942 BaseBdev4' 00:29:24.942 18:27:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:29:24.942 18:27:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:29:24.942 18:27:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:29:24.942 18:27:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:29:24.942 18:27:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:29:24.942 18:27:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:24.942 18:27:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:24.942 18:27:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:24.942 18:27:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:29:24.942 18:27:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:29:24.942 18:27:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:29:24.942 
18:27:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:29:24.942 18:27:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:24.942 18:27:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:24.942 18:27:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:29:24.942 18:27:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:24.942 18:27:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:29:24.942 18:27:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:29:24.942 18:27:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:29:24.942 18:27:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:29:24.942 18:27:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:29:24.942 18:27:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:24.942 18:27:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:24.942 18:27:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:24.942 18:27:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:29:24.942 18:27:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:29:24.942 18:27:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:29:24.942 18:27:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 
00:29:24.942 18:27:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:29:24.942 18:27:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:24.942 18:27:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:25.200 18:27:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:25.200 18:27:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:29:25.200 18:27:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:29:25.200 18:27:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:29:25.200 18:27:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:25.200 18:27:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:25.200 [2024-12-06 18:27:55.927207] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:29:25.200 [2024-12-06 18:27:55.927239] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:29:25.200 [2024-12-06 18:27:55.927309] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:29:25.200 [2024-12-06 18:27:55.927388] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:29:25.200 [2024-12-06 18:27:55.927400] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:29:25.200 18:27:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:25.200 18:27:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 69109 00:29:25.200 18:27:55 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@954 -- # '[' -z 69109 ']' 00:29:25.200 18:27:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 69109 00:29:25.200 18:27:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:29:25.200 18:27:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:25.200 18:27:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69109 00:29:25.200 killing process with pid 69109 00:29:25.200 18:27:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:25.200 18:27:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:25.200 18:27:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69109' 00:29:25.200 18:27:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 69109 00:29:25.200 [2024-12-06 18:27:55.962987] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:29:25.201 18:27:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 69109 00:29:25.459 [2024-12-06 18:27:56.366237] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:29:26.836 18:27:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:29:26.836 ************************************ 00:29:26.836 END TEST raid_state_function_test 00:29:26.836 ************************************ 00:29:26.836 00:29:26.836 real 0m11.318s 00:29:26.836 user 0m17.911s 00:29:26.836 sys 0m2.247s 00:29:26.836 18:27:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:26.836 18:27:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:26.836 18:27:57 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb 
raid_state_function_test raid0 4 true 00:29:26.836 18:27:57 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:29:26.836 18:27:57 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:26.836 18:27:57 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:29:26.836 ************************************ 00:29:26.836 START TEST raid_state_function_test_sb 00:29:26.836 ************************************ 00:29:26.836 18:27:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 4 true 00:29:26.836 18:27:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:29:26.836 18:27:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:29:26.836 18:27:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:29:26.836 18:27:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:29:26.836 18:27:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:29:26.836 18:27:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:29:26.836 18:27:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:29:26.836 18:27:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:29:26.836 18:27:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:29:26.836 18:27:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:29:26.836 18:27:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:29:26.836 18:27:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:29:26.836 18:27:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:29:26.836 
18:27:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:29:26.836 18:27:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:29:26.836 18:27:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:29:26.836 18:27:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:29:26.837 18:27:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:29:26.837 18:27:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:29:26.837 18:27:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:29:26.837 18:27:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:29:26.837 18:27:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:29:26.837 18:27:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:29:26.837 18:27:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:29:26.837 18:27:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:29:26.837 18:27:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:29:26.837 18:27:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:29:26.837 18:27:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:29:26.837 18:27:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:29:26.837 18:27:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=69775 00:29:26.837 18:27:57 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:29:26.837 18:27:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 69775' 00:29:26.837 Process raid pid: 69775 00:29:26.837 18:27:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 69775 00:29:26.837 18:27:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 69775 ']' 00:29:26.837 18:27:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:26.837 18:27:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:26.837 18:27:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:26.837 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:26.837 18:27:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:26.837 18:27:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:26.837 [2024-12-06 18:27:57.702343] Starting SPDK v25.01-pre git sha1 b6a18b192 / DPDK 24.03.0 initialization... 
00:29:26.837 [2024-12-06 18:27:57.702518] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:27.095 [2024-12-06 18:27:57.901460] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:27.095 [2024-12-06 18:27:58.021913] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:27.353 [2024-12-06 18:27:58.239234] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:29:27.353 [2024-12-06 18:27:58.239280] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:29:27.612 18:27:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:27.612 18:27:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:29:27.612 18:27:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:29:27.612 18:27:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:27.612 18:27:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:27.612 [2024-12-06 18:27:58.544593] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:29:27.612 [2024-12-06 18:27:58.544657] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:29:27.612 [2024-12-06 18:27:58.544669] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:29:27.612 [2024-12-06 18:27:58.544683] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:29:27.612 [2024-12-06 18:27:58.544690] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find 
bdev with name: BaseBdev3 00:29:27.612 [2024-12-06 18:27:58.544703] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:29:27.612 [2024-12-06 18:27:58.544710] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:29:27.612 [2024-12-06 18:27:58.544723] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:29:27.612 18:27:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:27.612 18:27:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:29:27.612 18:27:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:29:27.612 18:27:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:29:27.612 18:27:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:29:27.612 18:27:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:29:27.612 18:27:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:29:27.612 18:27:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:29:27.612 18:27:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:29:27.612 18:27:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:29:27.612 18:27:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:29:27.612 18:27:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:27.612 18:27:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:27.612 18:27:58 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:27.612 18:27:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:29:27.871 18:27:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:27.871 18:27:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:29:27.871 "name": "Existed_Raid", 00:29:27.871 "uuid": "225610e4-a9ca-4e81-a869-81e8b82ccbfb", 00:29:27.871 "strip_size_kb": 64, 00:29:27.871 "state": "configuring", 00:29:27.871 "raid_level": "raid0", 00:29:27.871 "superblock": true, 00:29:27.871 "num_base_bdevs": 4, 00:29:27.871 "num_base_bdevs_discovered": 0, 00:29:27.871 "num_base_bdevs_operational": 4, 00:29:27.871 "base_bdevs_list": [ 00:29:27.871 { 00:29:27.871 "name": "BaseBdev1", 00:29:27.871 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:27.871 "is_configured": false, 00:29:27.871 "data_offset": 0, 00:29:27.871 "data_size": 0 00:29:27.871 }, 00:29:27.871 { 00:29:27.871 "name": "BaseBdev2", 00:29:27.871 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:27.871 "is_configured": false, 00:29:27.871 "data_offset": 0, 00:29:27.871 "data_size": 0 00:29:27.871 }, 00:29:27.871 { 00:29:27.871 "name": "BaseBdev3", 00:29:27.871 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:27.871 "is_configured": false, 00:29:27.871 "data_offset": 0, 00:29:27.871 "data_size": 0 00:29:27.871 }, 00:29:27.871 { 00:29:27.871 "name": "BaseBdev4", 00:29:27.871 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:27.871 "is_configured": false, 00:29:27.871 "data_offset": 0, 00:29:27.871 "data_size": 0 00:29:27.871 } 00:29:27.871 ] 00:29:27.871 }' 00:29:27.871 18:27:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:29:27.871 18:27:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:28.129 18:27:58 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:29:28.129 18:27:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:28.129 18:27:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:28.129 [2024-12-06 18:27:58.995900] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:29:28.129 [2024-12-06 18:27:58.996077] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:29:28.129 18:27:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:28.129 18:27:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:29:28.129 18:27:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:28.129 18:27:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:28.129 [2024-12-06 18:27:59.007902] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:29:28.129 [2024-12-06 18:27:59.007952] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:29:28.129 [2024-12-06 18:27:59.007963] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:29:28.129 [2024-12-06 18:27:59.007976] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:29:28.129 [2024-12-06 18:27:59.007985] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:29:28.129 [2024-12-06 18:27:59.007998] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:29:28.129 [2024-12-06 18:27:59.008006] bdev.c:8674:bdev_open_ext: *NOTICE*: 
Currently unable to find bdev with name: BaseBdev4 00:29:28.129 [2024-12-06 18:27:59.008019] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:29:28.129 18:27:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:28.129 18:27:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:29:28.129 18:27:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:28.129 18:27:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:28.129 [2024-12-06 18:27:59.060991] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:29:28.129 BaseBdev1 00:29:28.129 18:27:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:28.129 18:27:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:29:28.130 18:27:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:29:28.130 18:27:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:29:28.130 18:27:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:29:28.130 18:27:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:29:28.130 18:27:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:29:28.130 18:27:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:29:28.130 18:27:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:28.130 18:27:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:28.130 18:27:59 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:28.130 18:27:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:29:28.130 18:27:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:28.130 18:27:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:28.388 [ 00:29:28.388 { 00:29:28.388 "name": "BaseBdev1", 00:29:28.388 "aliases": [ 00:29:28.388 "4b3d113f-aaea-43c5-9d88-a1c1e3c8d1f1" 00:29:28.388 ], 00:29:28.388 "product_name": "Malloc disk", 00:29:28.388 "block_size": 512, 00:29:28.388 "num_blocks": 65536, 00:29:28.388 "uuid": "4b3d113f-aaea-43c5-9d88-a1c1e3c8d1f1", 00:29:28.388 "assigned_rate_limits": { 00:29:28.388 "rw_ios_per_sec": 0, 00:29:28.388 "rw_mbytes_per_sec": 0, 00:29:28.388 "r_mbytes_per_sec": 0, 00:29:28.388 "w_mbytes_per_sec": 0 00:29:28.388 }, 00:29:28.388 "claimed": true, 00:29:28.388 "claim_type": "exclusive_write", 00:29:28.388 "zoned": false, 00:29:28.388 "supported_io_types": { 00:29:28.388 "read": true, 00:29:28.388 "write": true, 00:29:28.388 "unmap": true, 00:29:28.388 "flush": true, 00:29:28.388 "reset": true, 00:29:28.388 "nvme_admin": false, 00:29:28.388 "nvme_io": false, 00:29:28.388 "nvme_io_md": false, 00:29:28.388 "write_zeroes": true, 00:29:28.388 "zcopy": true, 00:29:28.388 "get_zone_info": false, 00:29:28.388 "zone_management": false, 00:29:28.388 "zone_append": false, 00:29:28.388 "compare": false, 00:29:28.388 "compare_and_write": false, 00:29:28.389 "abort": true, 00:29:28.389 "seek_hole": false, 00:29:28.389 "seek_data": false, 00:29:28.389 "copy": true, 00:29:28.389 "nvme_iov_md": false 00:29:28.389 }, 00:29:28.389 "memory_domains": [ 00:29:28.389 { 00:29:28.389 "dma_device_id": "system", 00:29:28.389 "dma_device_type": 1 00:29:28.389 }, 00:29:28.389 { 00:29:28.389 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:29:28.389 "dma_device_type": 2 00:29:28.389 } 
00:29:28.389 ], 00:29:28.389 "driver_specific": {} 00:29:28.389 } 00:29:28.389 ] 00:29:28.389 18:27:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:28.389 18:27:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:29:28.389 18:27:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:29:28.389 18:27:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:29:28.389 18:27:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:29:28.389 18:27:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:29:28.389 18:27:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:29:28.389 18:27:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:29:28.389 18:27:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:29:28.389 18:27:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:29:28.389 18:27:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:29:28.389 18:27:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:29:28.389 18:27:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:28.389 18:27:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:29:28.389 18:27:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:28.389 18:27:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:28.389 18:27:59 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:28.389 18:27:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:29:28.389 "name": "Existed_Raid", 00:29:28.389 "uuid": "5a3241b9-55aa-4eeb-bc1c-20cc327ce06d", 00:29:28.389 "strip_size_kb": 64, 00:29:28.389 "state": "configuring", 00:29:28.389 "raid_level": "raid0", 00:29:28.389 "superblock": true, 00:29:28.389 "num_base_bdevs": 4, 00:29:28.389 "num_base_bdevs_discovered": 1, 00:29:28.389 "num_base_bdevs_operational": 4, 00:29:28.389 "base_bdevs_list": [ 00:29:28.389 { 00:29:28.389 "name": "BaseBdev1", 00:29:28.389 "uuid": "4b3d113f-aaea-43c5-9d88-a1c1e3c8d1f1", 00:29:28.389 "is_configured": true, 00:29:28.389 "data_offset": 2048, 00:29:28.389 "data_size": 63488 00:29:28.389 }, 00:29:28.389 { 00:29:28.389 "name": "BaseBdev2", 00:29:28.389 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:28.389 "is_configured": false, 00:29:28.389 "data_offset": 0, 00:29:28.389 "data_size": 0 00:29:28.389 }, 00:29:28.389 { 00:29:28.389 "name": "BaseBdev3", 00:29:28.389 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:28.389 "is_configured": false, 00:29:28.389 "data_offset": 0, 00:29:28.389 "data_size": 0 00:29:28.389 }, 00:29:28.389 { 00:29:28.389 "name": "BaseBdev4", 00:29:28.389 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:28.389 "is_configured": false, 00:29:28.389 "data_offset": 0, 00:29:28.389 "data_size": 0 00:29:28.389 } 00:29:28.389 ] 00:29:28.389 }' 00:29:28.389 18:27:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:29:28.389 18:27:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:28.649 18:27:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:29:28.649 18:27:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:28.649 18:27:59 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:28.649 [2024-12-06 18:27:59.504446] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:29:28.649 [2024-12-06 18:27:59.504503] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:29:28.649 18:27:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:28.649 18:27:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:29:28.649 18:27:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:28.649 18:27:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:28.649 [2024-12-06 18:27:59.516511] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:29:28.649 [2024-12-06 18:27:59.518873] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:29:28.649 [2024-12-06 18:27:59.519050] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:29:28.649 [2024-12-06 18:27:59.519159] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:29:28.649 [2024-12-06 18:27:59.519241] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:29:28.649 [2024-12-06 18:27:59.519430] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:29:28.649 [2024-12-06 18:27:59.519474] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:29:28.649 18:27:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:28.649 18:27:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # 
(( i = 1 )) 00:29:28.649 18:27:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:29:28.649 18:27:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:29:28.649 18:27:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:29:28.649 18:27:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:29:28.649 18:27:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:29:28.649 18:27:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:29:28.649 18:27:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:29:28.649 18:27:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:29:28.649 18:27:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:29:28.649 18:27:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:29:28.649 18:27:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:29:28.649 18:27:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:29:28.649 18:27:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:28.649 18:27:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:28.649 18:27:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:28.649 18:27:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:28.649 18:27:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:29:28.649 "name": "Existed_Raid", 00:29:28.649 "uuid": "03eb6256-ee0c-4058-8f25-666af29e2fb4", 00:29:28.649 "strip_size_kb": 64, 00:29:28.649 "state": "configuring", 00:29:28.649 "raid_level": "raid0", 00:29:28.649 "superblock": true, 00:29:28.649 "num_base_bdevs": 4, 00:29:28.649 "num_base_bdevs_discovered": 1, 00:29:28.649 "num_base_bdevs_operational": 4, 00:29:28.649 "base_bdevs_list": [ 00:29:28.649 { 00:29:28.649 "name": "BaseBdev1", 00:29:28.649 "uuid": "4b3d113f-aaea-43c5-9d88-a1c1e3c8d1f1", 00:29:28.649 "is_configured": true, 00:29:28.649 "data_offset": 2048, 00:29:28.649 "data_size": 63488 00:29:28.649 }, 00:29:28.649 { 00:29:28.649 "name": "BaseBdev2", 00:29:28.649 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:28.649 "is_configured": false, 00:29:28.649 "data_offset": 0, 00:29:28.649 "data_size": 0 00:29:28.649 }, 00:29:28.649 { 00:29:28.649 "name": "BaseBdev3", 00:29:28.649 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:28.649 "is_configured": false, 00:29:28.649 "data_offset": 0, 00:29:28.649 "data_size": 0 00:29:28.649 }, 00:29:28.649 { 00:29:28.649 "name": "BaseBdev4", 00:29:28.649 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:28.649 "is_configured": false, 00:29:28.649 "data_offset": 0, 00:29:28.649 "data_size": 0 00:29:28.649 } 00:29:28.649 ] 00:29:28.649 }' 00:29:28.649 18:27:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:29:28.649 18:27:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:29.250 18:27:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:29:29.250 18:27:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:29.250 18:27:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:29.250 [2024-12-06 18:28:00.007628] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev2 is claimed 00:29:29.250 BaseBdev2 00:29:29.250 18:28:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:29.250 18:28:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:29:29.250 18:28:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:29:29.250 18:28:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:29:29.250 18:28:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:29:29.250 18:28:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:29:29.250 18:28:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:29:29.250 18:28:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:29:29.250 18:28:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:29.250 18:28:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:29.250 18:28:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:29.250 18:28:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:29:29.250 18:28:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:29.250 18:28:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:29.250 [ 00:29:29.250 { 00:29:29.250 "name": "BaseBdev2", 00:29:29.250 "aliases": [ 00:29:29.250 "c8a7a221-f9d0-46e6-9e24-1496c71b1f76" 00:29:29.250 ], 00:29:29.250 "product_name": "Malloc disk", 00:29:29.250 "block_size": 512, 00:29:29.250 "num_blocks": 65536, 00:29:29.250 "uuid": "c8a7a221-f9d0-46e6-9e24-1496c71b1f76", 
00:29:29.250 "assigned_rate_limits": { 00:29:29.250 "rw_ios_per_sec": 0, 00:29:29.250 "rw_mbytes_per_sec": 0, 00:29:29.250 "r_mbytes_per_sec": 0, 00:29:29.250 "w_mbytes_per_sec": 0 00:29:29.250 }, 00:29:29.250 "claimed": true, 00:29:29.250 "claim_type": "exclusive_write", 00:29:29.250 "zoned": false, 00:29:29.250 "supported_io_types": { 00:29:29.250 "read": true, 00:29:29.250 "write": true, 00:29:29.250 "unmap": true, 00:29:29.250 "flush": true, 00:29:29.250 "reset": true, 00:29:29.250 "nvme_admin": false, 00:29:29.250 "nvme_io": false, 00:29:29.250 "nvme_io_md": false, 00:29:29.250 "write_zeroes": true, 00:29:29.250 "zcopy": true, 00:29:29.250 "get_zone_info": false, 00:29:29.250 "zone_management": false, 00:29:29.250 "zone_append": false, 00:29:29.250 "compare": false, 00:29:29.250 "compare_and_write": false, 00:29:29.250 "abort": true, 00:29:29.250 "seek_hole": false, 00:29:29.250 "seek_data": false, 00:29:29.250 "copy": true, 00:29:29.250 "nvme_iov_md": false 00:29:29.250 }, 00:29:29.250 "memory_domains": [ 00:29:29.250 { 00:29:29.250 "dma_device_id": "system", 00:29:29.250 "dma_device_type": 1 00:29:29.250 }, 00:29:29.250 { 00:29:29.250 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:29:29.250 "dma_device_type": 2 00:29:29.250 } 00:29:29.250 ], 00:29:29.250 "driver_specific": {} 00:29:29.250 } 00:29:29.250 ] 00:29:29.250 18:28:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:29.250 18:28:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:29:29.250 18:28:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:29:29.250 18:28:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:29:29.250 18:28:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:29:29.251 18:28:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=Existed_Raid 00:29:29.251 18:28:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:29:29.251 18:28:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:29:29.251 18:28:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:29:29.251 18:28:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:29:29.251 18:28:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:29:29.251 18:28:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:29:29.251 18:28:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:29:29.251 18:28:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:29:29.251 18:28:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:29:29.251 18:28:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:29.251 18:28:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:29.251 18:28:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:29.251 18:28:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:29.251 18:28:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:29:29.251 "name": "Existed_Raid", 00:29:29.251 "uuid": "03eb6256-ee0c-4058-8f25-666af29e2fb4", 00:29:29.251 "strip_size_kb": 64, 00:29:29.251 "state": "configuring", 00:29:29.251 "raid_level": "raid0", 00:29:29.251 "superblock": true, 00:29:29.251 "num_base_bdevs": 4, 00:29:29.251 "num_base_bdevs_discovered": 2, 00:29:29.251 
"num_base_bdevs_operational": 4, 00:29:29.251 "base_bdevs_list": [ 00:29:29.251 { 00:29:29.251 "name": "BaseBdev1", 00:29:29.251 "uuid": "4b3d113f-aaea-43c5-9d88-a1c1e3c8d1f1", 00:29:29.251 "is_configured": true, 00:29:29.251 "data_offset": 2048, 00:29:29.251 "data_size": 63488 00:29:29.251 }, 00:29:29.251 { 00:29:29.251 "name": "BaseBdev2", 00:29:29.251 "uuid": "c8a7a221-f9d0-46e6-9e24-1496c71b1f76", 00:29:29.251 "is_configured": true, 00:29:29.251 "data_offset": 2048, 00:29:29.251 "data_size": 63488 00:29:29.251 }, 00:29:29.251 { 00:29:29.251 "name": "BaseBdev3", 00:29:29.251 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:29.251 "is_configured": false, 00:29:29.251 "data_offset": 0, 00:29:29.251 "data_size": 0 00:29:29.251 }, 00:29:29.251 { 00:29:29.251 "name": "BaseBdev4", 00:29:29.251 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:29.251 "is_configured": false, 00:29:29.251 "data_offset": 0, 00:29:29.251 "data_size": 0 00:29:29.251 } 00:29:29.251 ] 00:29:29.251 }' 00:29:29.251 18:28:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:29:29.251 18:28:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:29.831 18:28:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:29:29.831 18:28:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:29.831 18:28:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:29.831 [2024-12-06 18:28:00.533099] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:29:29.831 BaseBdev3 00:29:29.831 18:28:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:29.831 18:28:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:29:29.831 18:28:00 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:29:29.831 18:28:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:29:29.831 18:28:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:29:29.831 18:28:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:29:29.831 18:28:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:29:29.831 18:28:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:29:29.831 18:28:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:29.831 18:28:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:29.831 18:28:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:29.831 18:28:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:29:29.831 18:28:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:29.831 18:28:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:29.831 [ 00:29:29.831 { 00:29:29.831 "name": "BaseBdev3", 00:29:29.831 "aliases": [ 00:29:29.831 "abdc279a-48f6-4930-ae82-347257333690" 00:29:29.831 ], 00:29:29.831 "product_name": "Malloc disk", 00:29:29.831 "block_size": 512, 00:29:29.831 "num_blocks": 65536, 00:29:29.831 "uuid": "abdc279a-48f6-4930-ae82-347257333690", 00:29:29.831 "assigned_rate_limits": { 00:29:29.831 "rw_ios_per_sec": 0, 00:29:29.831 "rw_mbytes_per_sec": 0, 00:29:29.831 "r_mbytes_per_sec": 0, 00:29:29.831 "w_mbytes_per_sec": 0 00:29:29.831 }, 00:29:29.831 "claimed": true, 00:29:29.831 "claim_type": "exclusive_write", 00:29:29.831 "zoned": false, 00:29:29.831 "supported_io_types": { 
00:29:29.831 "read": true, 00:29:29.831 "write": true, 00:29:29.831 "unmap": true, 00:29:29.831 "flush": true, 00:29:29.831 "reset": true, 00:29:29.831 "nvme_admin": false, 00:29:29.831 "nvme_io": false, 00:29:29.831 "nvme_io_md": false, 00:29:29.831 "write_zeroes": true, 00:29:29.831 "zcopy": true, 00:29:29.831 "get_zone_info": false, 00:29:29.831 "zone_management": false, 00:29:29.831 "zone_append": false, 00:29:29.831 "compare": false, 00:29:29.831 "compare_and_write": false, 00:29:29.831 "abort": true, 00:29:29.831 "seek_hole": false, 00:29:29.831 "seek_data": false, 00:29:29.831 "copy": true, 00:29:29.831 "nvme_iov_md": false 00:29:29.831 }, 00:29:29.831 "memory_domains": [ 00:29:29.831 { 00:29:29.831 "dma_device_id": "system", 00:29:29.831 "dma_device_type": 1 00:29:29.831 }, 00:29:29.831 { 00:29:29.831 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:29:29.831 "dma_device_type": 2 00:29:29.831 } 00:29:29.831 ], 00:29:29.831 "driver_specific": {} 00:29:29.831 } 00:29:29.831 ] 00:29:29.831 18:28:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:29.831 18:28:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:29:29.831 18:28:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:29:29.831 18:28:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:29:29.831 18:28:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:29:29.831 18:28:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:29:29.831 18:28:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:29:29.831 18:28:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:29:29.831 18:28:00 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@106 -- # local strip_size=64 00:29:29.831 18:28:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:29:29.831 18:28:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:29:29.831 18:28:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:29:29.831 18:28:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:29:29.831 18:28:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:29:29.831 18:28:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:29:29.832 18:28:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:29.832 18:28:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:29.832 18:28:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:29.832 18:28:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:29.832 18:28:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:29:29.832 "name": "Existed_Raid", 00:29:29.832 "uuid": "03eb6256-ee0c-4058-8f25-666af29e2fb4", 00:29:29.832 "strip_size_kb": 64, 00:29:29.832 "state": "configuring", 00:29:29.832 "raid_level": "raid0", 00:29:29.832 "superblock": true, 00:29:29.832 "num_base_bdevs": 4, 00:29:29.832 "num_base_bdevs_discovered": 3, 00:29:29.832 "num_base_bdevs_operational": 4, 00:29:29.832 "base_bdevs_list": [ 00:29:29.832 { 00:29:29.832 "name": "BaseBdev1", 00:29:29.832 "uuid": "4b3d113f-aaea-43c5-9d88-a1c1e3c8d1f1", 00:29:29.832 "is_configured": true, 00:29:29.832 "data_offset": 2048, 00:29:29.832 "data_size": 63488 00:29:29.832 }, 00:29:29.832 { 00:29:29.832 "name": "BaseBdev2", 00:29:29.832 
"uuid": "c8a7a221-f9d0-46e6-9e24-1496c71b1f76", 00:29:29.832 "is_configured": true, 00:29:29.832 "data_offset": 2048, 00:29:29.832 "data_size": 63488 00:29:29.832 }, 00:29:29.832 { 00:29:29.832 "name": "BaseBdev3", 00:29:29.832 "uuid": "abdc279a-48f6-4930-ae82-347257333690", 00:29:29.832 "is_configured": true, 00:29:29.832 "data_offset": 2048, 00:29:29.832 "data_size": 63488 00:29:29.832 }, 00:29:29.832 { 00:29:29.832 "name": "BaseBdev4", 00:29:29.832 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:29.832 "is_configured": false, 00:29:29.832 "data_offset": 0, 00:29:29.832 "data_size": 0 00:29:29.832 } 00:29:29.832 ] 00:29:29.832 }' 00:29:29.832 18:28:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:29:29.832 18:28:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:30.092 18:28:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:29:30.092 18:28:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:30.092 18:28:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:30.092 [2024-12-06 18:28:01.006184] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:29:30.092 [2024-12-06 18:28:01.006678] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:29:30.092 [2024-12-06 18:28:01.006701] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:29:30.092 [2024-12-06 18:28:01.007010] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:29:30.092 [2024-12-06 18:28:01.007150] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:29:30.092 [2024-12-06 18:28:01.007180] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 
0x617000007e80 00:29:30.092 [2024-12-06 18:28:01.007334] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:29:30.092 BaseBdev4 00:29:30.092 18:28:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:30.092 18:28:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:29:30.092 18:28:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:29:30.092 18:28:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:29:30.092 18:28:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:29:30.092 18:28:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:29:30.092 18:28:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:29:30.092 18:28:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:29:30.092 18:28:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:30.092 18:28:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:30.092 18:28:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:30.092 18:28:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:29:30.092 18:28:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:30.092 18:28:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:30.092 [ 00:29:30.092 { 00:29:30.092 "name": "BaseBdev4", 00:29:30.092 "aliases": [ 00:29:30.092 "2bdf455b-7beb-4151-8d99-903ad66d7601" 00:29:30.092 ], 00:29:30.092 "product_name": "Malloc disk", 00:29:30.092 "block_size": 512, 
00:29:30.092 "num_blocks": 65536, 00:29:30.092 "uuid": "2bdf455b-7beb-4151-8d99-903ad66d7601", 00:29:30.092 "assigned_rate_limits": { 00:29:30.092 "rw_ios_per_sec": 0, 00:29:30.092 "rw_mbytes_per_sec": 0, 00:29:30.092 "r_mbytes_per_sec": 0, 00:29:30.092 "w_mbytes_per_sec": 0 00:29:30.092 }, 00:29:30.092 "claimed": true, 00:29:30.092 "claim_type": "exclusive_write", 00:29:30.092 "zoned": false, 00:29:30.092 "supported_io_types": { 00:29:30.092 "read": true, 00:29:30.092 "write": true, 00:29:30.351 "unmap": true, 00:29:30.351 "flush": true, 00:29:30.351 "reset": true, 00:29:30.351 "nvme_admin": false, 00:29:30.351 "nvme_io": false, 00:29:30.351 "nvme_io_md": false, 00:29:30.351 "write_zeroes": true, 00:29:30.351 "zcopy": true, 00:29:30.351 "get_zone_info": false, 00:29:30.351 "zone_management": false, 00:29:30.351 "zone_append": false, 00:29:30.351 "compare": false, 00:29:30.351 "compare_and_write": false, 00:29:30.351 "abort": true, 00:29:30.351 "seek_hole": false, 00:29:30.351 "seek_data": false, 00:29:30.351 "copy": true, 00:29:30.351 "nvme_iov_md": false 00:29:30.351 }, 00:29:30.351 "memory_domains": [ 00:29:30.351 { 00:29:30.351 "dma_device_id": "system", 00:29:30.351 "dma_device_type": 1 00:29:30.351 }, 00:29:30.351 { 00:29:30.351 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:29:30.351 "dma_device_type": 2 00:29:30.352 } 00:29:30.352 ], 00:29:30.352 "driver_specific": {} 00:29:30.352 } 00:29:30.352 ] 00:29:30.352 18:28:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:30.352 18:28:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:29:30.352 18:28:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:29:30.352 18:28:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:29:30.352 18:28:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 
64 4 00:29:30.352 18:28:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:29:30.352 18:28:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:29:30.352 18:28:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:29:30.352 18:28:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:29:30.352 18:28:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:29:30.352 18:28:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:29:30.352 18:28:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:29:30.352 18:28:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:29:30.352 18:28:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:29:30.352 18:28:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:30.352 18:28:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:29:30.352 18:28:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:30.352 18:28:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:30.352 18:28:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:30.352 18:28:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:29:30.352 "name": "Existed_Raid", 00:29:30.352 "uuid": "03eb6256-ee0c-4058-8f25-666af29e2fb4", 00:29:30.352 "strip_size_kb": 64, 00:29:30.352 "state": "online", 00:29:30.352 "raid_level": "raid0", 00:29:30.352 "superblock": true, 00:29:30.352 "num_base_bdevs": 4, 
00:29:30.352 "num_base_bdevs_discovered": 4, 00:29:30.352 "num_base_bdevs_operational": 4, 00:29:30.352 "base_bdevs_list": [ 00:29:30.352 { 00:29:30.352 "name": "BaseBdev1", 00:29:30.352 "uuid": "4b3d113f-aaea-43c5-9d88-a1c1e3c8d1f1", 00:29:30.352 "is_configured": true, 00:29:30.352 "data_offset": 2048, 00:29:30.352 "data_size": 63488 00:29:30.352 }, 00:29:30.352 { 00:29:30.352 "name": "BaseBdev2", 00:29:30.352 "uuid": "c8a7a221-f9d0-46e6-9e24-1496c71b1f76", 00:29:30.352 "is_configured": true, 00:29:30.352 "data_offset": 2048, 00:29:30.352 "data_size": 63488 00:29:30.352 }, 00:29:30.352 { 00:29:30.352 "name": "BaseBdev3", 00:29:30.352 "uuid": "abdc279a-48f6-4930-ae82-347257333690", 00:29:30.352 "is_configured": true, 00:29:30.352 "data_offset": 2048, 00:29:30.352 "data_size": 63488 00:29:30.352 }, 00:29:30.352 { 00:29:30.352 "name": "BaseBdev4", 00:29:30.352 "uuid": "2bdf455b-7beb-4151-8d99-903ad66d7601", 00:29:30.352 "is_configured": true, 00:29:30.352 "data_offset": 2048, 00:29:30.352 "data_size": 63488 00:29:30.352 } 00:29:30.352 ] 00:29:30.352 }' 00:29:30.352 18:28:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:29:30.352 18:28:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:30.612 18:28:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:29:30.612 18:28:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:29:30.612 18:28:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:29:30.612 18:28:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:29:30.612 18:28:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:29:30.612 18:28:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:29:30.612 
18:28:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:29:30.612 18:28:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:30.612 18:28:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:29:30.612 18:28:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:30.612 [2024-12-06 18:28:01.510167] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:29:30.612 18:28:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:30.612 18:28:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:29:30.612 "name": "Existed_Raid", 00:29:30.612 "aliases": [ 00:29:30.612 "03eb6256-ee0c-4058-8f25-666af29e2fb4" 00:29:30.612 ], 00:29:30.612 "product_name": "Raid Volume", 00:29:30.612 "block_size": 512, 00:29:30.612 "num_blocks": 253952, 00:29:30.612 "uuid": "03eb6256-ee0c-4058-8f25-666af29e2fb4", 00:29:30.612 "assigned_rate_limits": { 00:29:30.612 "rw_ios_per_sec": 0, 00:29:30.612 "rw_mbytes_per_sec": 0, 00:29:30.612 "r_mbytes_per_sec": 0, 00:29:30.612 "w_mbytes_per_sec": 0 00:29:30.612 }, 00:29:30.612 "claimed": false, 00:29:30.612 "zoned": false, 00:29:30.612 "supported_io_types": { 00:29:30.612 "read": true, 00:29:30.612 "write": true, 00:29:30.612 "unmap": true, 00:29:30.612 "flush": true, 00:29:30.612 "reset": true, 00:29:30.612 "nvme_admin": false, 00:29:30.612 "nvme_io": false, 00:29:30.612 "nvme_io_md": false, 00:29:30.612 "write_zeroes": true, 00:29:30.612 "zcopy": false, 00:29:30.612 "get_zone_info": false, 00:29:30.612 "zone_management": false, 00:29:30.612 "zone_append": false, 00:29:30.612 "compare": false, 00:29:30.612 "compare_and_write": false, 00:29:30.612 "abort": false, 00:29:30.612 "seek_hole": false, 00:29:30.612 "seek_data": false, 00:29:30.612 "copy": false, 00:29:30.612 
"nvme_iov_md": false 00:29:30.612 }, 00:29:30.612 "memory_domains": [ 00:29:30.612 { 00:29:30.612 "dma_device_id": "system", 00:29:30.612 "dma_device_type": 1 00:29:30.612 }, 00:29:30.612 { 00:29:30.612 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:29:30.612 "dma_device_type": 2 00:29:30.612 }, 00:29:30.612 { 00:29:30.612 "dma_device_id": "system", 00:29:30.612 "dma_device_type": 1 00:29:30.612 }, 00:29:30.612 { 00:29:30.612 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:29:30.612 "dma_device_type": 2 00:29:30.612 }, 00:29:30.612 { 00:29:30.612 "dma_device_id": "system", 00:29:30.612 "dma_device_type": 1 00:29:30.612 }, 00:29:30.612 { 00:29:30.612 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:29:30.612 "dma_device_type": 2 00:29:30.612 }, 00:29:30.612 { 00:29:30.612 "dma_device_id": "system", 00:29:30.612 "dma_device_type": 1 00:29:30.612 }, 00:29:30.612 { 00:29:30.612 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:29:30.612 "dma_device_type": 2 00:29:30.612 } 00:29:30.612 ], 00:29:30.612 "driver_specific": { 00:29:30.612 "raid": { 00:29:30.612 "uuid": "03eb6256-ee0c-4058-8f25-666af29e2fb4", 00:29:30.612 "strip_size_kb": 64, 00:29:30.612 "state": "online", 00:29:30.612 "raid_level": "raid0", 00:29:30.612 "superblock": true, 00:29:30.612 "num_base_bdevs": 4, 00:29:30.612 "num_base_bdevs_discovered": 4, 00:29:30.612 "num_base_bdevs_operational": 4, 00:29:30.612 "base_bdevs_list": [ 00:29:30.612 { 00:29:30.612 "name": "BaseBdev1", 00:29:30.612 "uuid": "4b3d113f-aaea-43c5-9d88-a1c1e3c8d1f1", 00:29:30.612 "is_configured": true, 00:29:30.612 "data_offset": 2048, 00:29:30.612 "data_size": 63488 00:29:30.612 }, 00:29:30.612 { 00:29:30.612 "name": "BaseBdev2", 00:29:30.612 "uuid": "c8a7a221-f9d0-46e6-9e24-1496c71b1f76", 00:29:30.612 "is_configured": true, 00:29:30.612 "data_offset": 2048, 00:29:30.612 "data_size": 63488 00:29:30.612 }, 00:29:30.612 { 00:29:30.612 "name": "BaseBdev3", 00:29:30.612 "uuid": "abdc279a-48f6-4930-ae82-347257333690", 00:29:30.612 "is_configured": true, 
00:29:30.612 "data_offset": 2048, 00:29:30.612 "data_size": 63488 00:29:30.612 }, 00:29:30.612 { 00:29:30.612 "name": "BaseBdev4", 00:29:30.612 "uuid": "2bdf455b-7beb-4151-8d99-903ad66d7601", 00:29:30.612 "is_configured": true, 00:29:30.612 "data_offset": 2048, 00:29:30.612 "data_size": 63488 00:29:30.612 } 00:29:30.612 ] 00:29:30.612 } 00:29:30.612 } 00:29:30.612 }' 00:29:30.612 18:28:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:29:30.871 18:28:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:29:30.871 BaseBdev2 00:29:30.871 BaseBdev3 00:29:30.871 BaseBdev4' 00:29:30.871 18:28:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:29:30.871 18:28:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:29:30.871 18:28:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:29:30.871 18:28:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:29:30.871 18:28:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:30.871 18:28:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:30.871 18:28:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:29:30.871 18:28:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:30.871 18:28:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:29:30.871 18:28:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:29:30.871 18:28:01 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:29:30.871 18:28:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:29:30.871 18:28:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:30.871 18:28:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:29:30.871 18:28:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:30.871 18:28:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:30.871 18:28:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:29:30.871 18:28:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:29:30.871 18:28:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:29:30.871 18:28:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:29:30.871 18:28:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:30.871 18:28:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:30.871 18:28:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:29:30.871 18:28:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:30.871 18:28:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:29:30.871 18:28:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:29:30.871 18:28:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # 
for name in $base_bdev_names 00:29:30.871 18:28:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:29:30.871 18:28:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:29:30.871 18:28:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:30.871 18:28:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:31.131 18:28:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:31.131 18:28:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:29:31.131 18:28:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:29:31.131 18:28:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:29:31.131 18:28:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:31.131 18:28:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:31.131 [2024-12-06 18:28:01.845849] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:29:31.131 [2024-12-06 18:28:01.846004] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:29:31.131 [2024-12-06 18:28:01.846079] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:29:31.131 18:28:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:31.131 18:28:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:29:31.131 18:28:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:29:31.131 18:28:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # 
case $1 in 00:29:31.131 18:28:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:29:31.131 18:28:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:29:31.131 18:28:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:29:31.131 18:28:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:29:31.131 18:28:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:29:31.131 18:28:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:29:31.131 18:28:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:29:31.131 18:28:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:29:31.131 18:28:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:29:31.131 18:28:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:29:31.131 18:28:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:29:31.131 18:28:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:29:31.131 18:28:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:31.131 18:28:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:29:31.131 18:28:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:31.131 18:28:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:31.131 18:28:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:29:31.131 18:28:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:29:31.131 "name": "Existed_Raid", 00:29:31.131 "uuid": "03eb6256-ee0c-4058-8f25-666af29e2fb4", 00:29:31.131 "strip_size_kb": 64, 00:29:31.131 "state": "offline", 00:29:31.131 "raid_level": "raid0", 00:29:31.131 "superblock": true, 00:29:31.131 "num_base_bdevs": 4, 00:29:31.131 "num_base_bdevs_discovered": 3, 00:29:31.131 "num_base_bdevs_operational": 3, 00:29:31.131 "base_bdevs_list": [ 00:29:31.131 { 00:29:31.131 "name": null, 00:29:31.131 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:31.131 "is_configured": false, 00:29:31.131 "data_offset": 0, 00:29:31.131 "data_size": 63488 00:29:31.131 }, 00:29:31.131 { 00:29:31.131 "name": "BaseBdev2", 00:29:31.131 "uuid": "c8a7a221-f9d0-46e6-9e24-1496c71b1f76", 00:29:31.131 "is_configured": true, 00:29:31.131 "data_offset": 2048, 00:29:31.131 "data_size": 63488 00:29:31.131 }, 00:29:31.131 { 00:29:31.131 "name": "BaseBdev3", 00:29:31.131 "uuid": "abdc279a-48f6-4930-ae82-347257333690", 00:29:31.132 "is_configured": true, 00:29:31.132 "data_offset": 2048, 00:29:31.132 "data_size": 63488 00:29:31.132 }, 00:29:31.132 { 00:29:31.132 "name": "BaseBdev4", 00:29:31.132 "uuid": "2bdf455b-7beb-4151-8d99-903ad66d7601", 00:29:31.132 "is_configured": true, 00:29:31.132 "data_offset": 2048, 00:29:31.132 "data_size": 63488 00:29:31.132 } 00:29:31.132 ] 00:29:31.132 }' 00:29:31.132 18:28:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:29:31.132 18:28:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:31.701 18:28:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:29:31.701 18:28:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:29:31.701 18:28:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:29:31.701 18:28:02 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:31.701 18:28:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:31.701 18:28:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:31.701 18:28:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:31.701 18:28:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:29:31.701 18:28:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:29:31.701 18:28:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:29:31.701 18:28:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:31.701 18:28:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:31.702 [2024-12-06 18:28:02.458360] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:29:31.702 18:28:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:31.702 18:28:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:29:31.702 18:28:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:29:31.702 18:28:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:31.702 18:28:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:29:31.702 18:28:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:31.702 18:28:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:31.702 18:28:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:29:31.702 18:28:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:29:31.702 18:28:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:29:31.702 18:28:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:29:31.702 18:28:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:31.702 18:28:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:31.702 [2024-12-06 18:28:02.610311] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:29:31.961 18:28:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:31.961 18:28:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:29:31.961 18:28:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:29:31.961 18:28:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:29:31.961 18:28:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:31.961 18:28:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:31.961 18:28:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:31.961 18:28:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:31.961 18:28:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:29:31.961 18:28:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:29:31.961 18:28:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:29:31.961 18:28:02 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:31.961 18:28:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:31.961 [2024-12-06 18:28:02.767153] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:29:31.961 [2024-12-06 18:28:02.767396] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:29:31.961 18:28:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:31.962 18:28:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:29:31.962 18:28:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:29:31.962 18:28:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:31.962 18:28:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:29:31.962 18:28:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:31.962 18:28:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:31.962 18:28:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:32.221 18:28:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:29:32.221 18:28:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:29:32.221 18:28:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:29:32.221 18:28:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:29:32.221 18:28:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:29:32.221 18:28:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 
32 512 -b BaseBdev2 00:29:32.221 18:28:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:32.221 18:28:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:32.221 BaseBdev2 00:29:32.221 18:28:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:32.221 18:28:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:29:32.221 18:28:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:29:32.221 18:28:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:29:32.221 18:28:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:29:32.221 18:28:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:29:32.221 18:28:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:29:32.221 18:28:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:29:32.221 18:28:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:32.221 18:28:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:32.222 18:28:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:32.222 18:28:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:29:32.222 18:28:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:32.222 18:28:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:32.222 [ 00:29:32.222 { 00:29:32.222 "name": "BaseBdev2", 00:29:32.222 "aliases": [ 00:29:32.222 
"b008b089-e9f4-433f-9546-56d1929ec336" 00:29:32.222 ], 00:29:32.222 "product_name": "Malloc disk", 00:29:32.222 "block_size": 512, 00:29:32.222 "num_blocks": 65536, 00:29:32.222 "uuid": "b008b089-e9f4-433f-9546-56d1929ec336", 00:29:32.222 "assigned_rate_limits": { 00:29:32.222 "rw_ios_per_sec": 0, 00:29:32.222 "rw_mbytes_per_sec": 0, 00:29:32.222 "r_mbytes_per_sec": 0, 00:29:32.222 "w_mbytes_per_sec": 0 00:29:32.222 }, 00:29:32.222 "claimed": false, 00:29:32.222 "zoned": false, 00:29:32.222 "supported_io_types": { 00:29:32.222 "read": true, 00:29:32.222 "write": true, 00:29:32.222 "unmap": true, 00:29:32.222 "flush": true, 00:29:32.222 "reset": true, 00:29:32.222 "nvme_admin": false, 00:29:32.222 "nvme_io": false, 00:29:32.222 "nvme_io_md": false, 00:29:32.222 "write_zeroes": true, 00:29:32.222 "zcopy": true, 00:29:32.222 "get_zone_info": false, 00:29:32.222 "zone_management": false, 00:29:32.222 "zone_append": false, 00:29:32.222 "compare": false, 00:29:32.222 "compare_and_write": false, 00:29:32.222 "abort": true, 00:29:32.222 "seek_hole": false, 00:29:32.222 "seek_data": false, 00:29:32.222 "copy": true, 00:29:32.222 "nvme_iov_md": false 00:29:32.222 }, 00:29:32.222 "memory_domains": [ 00:29:32.222 { 00:29:32.222 "dma_device_id": "system", 00:29:32.222 "dma_device_type": 1 00:29:32.222 }, 00:29:32.222 { 00:29:32.222 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:29:32.222 "dma_device_type": 2 00:29:32.222 } 00:29:32.222 ], 00:29:32.222 "driver_specific": {} 00:29:32.222 } 00:29:32.222 ] 00:29:32.222 18:28:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:32.222 18:28:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:29:32.222 18:28:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:29:32.222 18:28:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:29:32.222 18:28:03 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:29:32.222 18:28:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:32.222 18:28:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:32.222 BaseBdev3 00:29:32.222 18:28:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:32.222 18:28:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:29:32.222 18:28:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:29:32.222 18:28:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:29:32.222 18:28:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:29:32.222 18:28:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:29:32.222 18:28:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:29:32.222 18:28:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:29:32.222 18:28:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:32.222 18:28:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:32.222 18:28:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:32.222 18:28:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:29:32.222 18:28:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:32.222 18:28:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:32.222 [ 00:29:32.222 { 
00:29:32.222 "name": "BaseBdev3", 00:29:32.222 "aliases": [ 00:29:32.222 "a1a482a7-73ba-4e69-a1ad-52ec9c553349" 00:29:32.222 ], 00:29:32.222 "product_name": "Malloc disk", 00:29:32.222 "block_size": 512, 00:29:32.222 "num_blocks": 65536, 00:29:32.222 "uuid": "a1a482a7-73ba-4e69-a1ad-52ec9c553349", 00:29:32.222 "assigned_rate_limits": { 00:29:32.222 "rw_ios_per_sec": 0, 00:29:32.222 "rw_mbytes_per_sec": 0, 00:29:32.222 "r_mbytes_per_sec": 0, 00:29:32.222 "w_mbytes_per_sec": 0 00:29:32.222 }, 00:29:32.222 "claimed": false, 00:29:32.222 "zoned": false, 00:29:32.222 "supported_io_types": { 00:29:32.222 "read": true, 00:29:32.222 "write": true, 00:29:32.222 "unmap": true, 00:29:32.222 "flush": true, 00:29:32.222 "reset": true, 00:29:32.222 "nvme_admin": false, 00:29:32.222 "nvme_io": false, 00:29:32.222 "nvme_io_md": false, 00:29:32.222 "write_zeroes": true, 00:29:32.222 "zcopy": true, 00:29:32.222 "get_zone_info": false, 00:29:32.222 "zone_management": false, 00:29:32.222 "zone_append": false, 00:29:32.222 "compare": false, 00:29:32.222 "compare_and_write": false, 00:29:32.222 "abort": true, 00:29:32.222 "seek_hole": false, 00:29:32.222 "seek_data": false, 00:29:32.222 "copy": true, 00:29:32.222 "nvme_iov_md": false 00:29:32.222 }, 00:29:32.222 "memory_domains": [ 00:29:32.222 { 00:29:32.222 "dma_device_id": "system", 00:29:32.222 "dma_device_type": 1 00:29:32.222 }, 00:29:32.222 { 00:29:32.222 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:29:32.222 "dma_device_type": 2 00:29:32.222 } 00:29:32.222 ], 00:29:32.222 "driver_specific": {} 00:29:32.222 } 00:29:32.222 ] 00:29:32.222 18:28:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:32.222 18:28:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:29:32.222 18:28:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:29:32.222 18:28:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i 
< num_base_bdevs )) 00:29:32.222 18:28:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:29:32.222 18:28:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:32.222 18:28:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:32.222 BaseBdev4 00:29:32.222 18:28:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:32.222 18:28:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:29:32.222 18:28:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:29:32.222 18:28:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:29:32.222 18:28:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:29:32.222 18:28:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:29:32.222 18:28:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:29:32.222 18:28:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:29:32.222 18:28:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:32.222 18:28:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:32.222 18:28:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:32.222 18:28:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:29:32.222 18:28:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:32.222 18:28:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- 
# set +x 00:29:32.222 [ 00:29:32.222 { 00:29:32.222 "name": "BaseBdev4", 00:29:32.222 "aliases": [ 00:29:32.483 "3b519c30-a034-4aa5-8f24-47fd6b280802" 00:29:32.483 ], 00:29:32.483 "product_name": "Malloc disk", 00:29:32.483 "block_size": 512, 00:29:32.483 "num_blocks": 65536, 00:29:32.483 "uuid": "3b519c30-a034-4aa5-8f24-47fd6b280802", 00:29:32.483 "assigned_rate_limits": { 00:29:32.483 "rw_ios_per_sec": 0, 00:29:32.483 "rw_mbytes_per_sec": 0, 00:29:32.483 "r_mbytes_per_sec": 0, 00:29:32.483 "w_mbytes_per_sec": 0 00:29:32.483 }, 00:29:32.483 "claimed": false, 00:29:32.483 "zoned": false, 00:29:32.483 "supported_io_types": { 00:29:32.483 "read": true, 00:29:32.483 "write": true, 00:29:32.483 "unmap": true, 00:29:32.483 "flush": true, 00:29:32.483 "reset": true, 00:29:32.483 "nvme_admin": false, 00:29:32.483 "nvme_io": false, 00:29:32.483 "nvme_io_md": false, 00:29:32.483 "write_zeroes": true, 00:29:32.483 "zcopy": true, 00:29:32.483 "get_zone_info": false, 00:29:32.483 "zone_management": false, 00:29:32.483 "zone_append": false, 00:29:32.483 "compare": false, 00:29:32.483 "compare_and_write": false, 00:29:32.483 "abort": true, 00:29:32.483 "seek_hole": false, 00:29:32.483 "seek_data": false, 00:29:32.483 "copy": true, 00:29:32.483 "nvme_iov_md": false 00:29:32.483 }, 00:29:32.483 "memory_domains": [ 00:29:32.483 { 00:29:32.483 "dma_device_id": "system", 00:29:32.483 "dma_device_type": 1 00:29:32.483 }, 00:29:32.483 { 00:29:32.483 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:29:32.483 "dma_device_type": 2 00:29:32.483 } 00:29:32.483 ], 00:29:32.483 "driver_specific": {} 00:29:32.483 } 00:29:32.483 ] 00:29:32.483 18:28:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:32.483 18:28:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:29:32.483 18:28:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:29:32.483 18:28:03 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:29:32.483 18:28:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:29:32.483 18:28:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:32.483 18:28:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:32.483 [2024-12-06 18:28:03.194053] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:29:32.483 [2024-12-06 18:28:03.194103] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:29:32.483 [2024-12-06 18:28:03.194129] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:29:32.483 [2024-12-06 18:28:03.196336] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:29:32.483 [2024-12-06 18:28:03.196389] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:29:32.483 18:28:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:32.483 18:28:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:29:32.483 18:28:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:29:32.483 18:28:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:29:32.483 18:28:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:29:32.483 18:28:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:29:32.483 18:28:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:29:32.483 18:28:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:29:32.483 18:28:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:29:32.483 18:28:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:29:32.483 18:28:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:29:32.483 18:28:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:29:32.483 18:28:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:32.483 18:28:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:32.483 18:28:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:32.483 18:28:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:32.483 18:28:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:29:32.483 "name": "Existed_Raid", 00:29:32.483 "uuid": "78a392e1-4e62-4bfe-89ba-038096e26421", 00:29:32.483 "strip_size_kb": 64, 00:29:32.483 "state": "configuring", 00:29:32.483 "raid_level": "raid0", 00:29:32.483 "superblock": true, 00:29:32.483 "num_base_bdevs": 4, 00:29:32.483 "num_base_bdevs_discovered": 3, 00:29:32.483 "num_base_bdevs_operational": 4, 00:29:32.483 "base_bdevs_list": [ 00:29:32.483 { 00:29:32.483 "name": "BaseBdev1", 00:29:32.483 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:32.483 "is_configured": false, 00:29:32.483 "data_offset": 0, 00:29:32.483 "data_size": 0 00:29:32.483 }, 00:29:32.483 { 00:29:32.483 "name": "BaseBdev2", 00:29:32.483 "uuid": "b008b089-e9f4-433f-9546-56d1929ec336", 00:29:32.483 "is_configured": true, 00:29:32.483 "data_offset": 2048, 00:29:32.483 "data_size": 63488 
00:29:32.483 }, 00:29:32.483 { 00:29:32.483 "name": "BaseBdev3", 00:29:32.483 "uuid": "a1a482a7-73ba-4e69-a1ad-52ec9c553349", 00:29:32.483 "is_configured": true, 00:29:32.483 "data_offset": 2048, 00:29:32.483 "data_size": 63488 00:29:32.483 }, 00:29:32.483 { 00:29:32.483 "name": "BaseBdev4", 00:29:32.483 "uuid": "3b519c30-a034-4aa5-8f24-47fd6b280802", 00:29:32.483 "is_configured": true, 00:29:32.483 "data_offset": 2048, 00:29:32.483 "data_size": 63488 00:29:32.483 } 00:29:32.483 ] 00:29:32.483 }' 00:29:32.483 18:28:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:29:32.483 18:28:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:32.743 18:28:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:29:32.743 18:28:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:32.743 18:28:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:32.743 [2024-12-06 18:28:03.589817] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:29:32.743 18:28:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:32.743 18:28:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:29:32.743 18:28:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:29:32.743 18:28:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:29:32.743 18:28:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:29:32.743 18:28:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:29:32.743 18:28:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:29:32.743 18:28:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:29:32.743 18:28:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:29:32.743 18:28:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:29:32.743 18:28:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:29:32.743 18:28:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:32.743 18:28:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:32.743 18:28:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:32.743 18:28:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:29:32.743 18:28:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:32.743 18:28:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:29:32.743 "name": "Existed_Raid", 00:29:32.743 "uuid": "78a392e1-4e62-4bfe-89ba-038096e26421", 00:29:32.743 "strip_size_kb": 64, 00:29:32.743 "state": "configuring", 00:29:32.744 "raid_level": "raid0", 00:29:32.744 "superblock": true, 00:29:32.744 "num_base_bdevs": 4, 00:29:32.744 "num_base_bdevs_discovered": 2, 00:29:32.744 "num_base_bdevs_operational": 4, 00:29:32.744 "base_bdevs_list": [ 00:29:32.744 { 00:29:32.744 "name": "BaseBdev1", 00:29:32.744 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:32.744 "is_configured": false, 00:29:32.744 "data_offset": 0, 00:29:32.744 "data_size": 0 00:29:32.744 }, 00:29:32.744 { 00:29:32.744 "name": null, 00:29:32.744 "uuid": "b008b089-e9f4-433f-9546-56d1929ec336", 00:29:32.744 "is_configured": false, 00:29:32.744 "data_offset": 0, 00:29:32.744 "data_size": 63488 
00:29:32.744 }, 00:29:32.744 { 00:29:32.744 "name": "BaseBdev3", 00:29:32.744 "uuid": "a1a482a7-73ba-4e69-a1ad-52ec9c553349", 00:29:32.744 "is_configured": true, 00:29:32.744 "data_offset": 2048, 00:29:32.744 "data_size": 63488 00:29:32.744 }, 00:29:32.744 { 00:29:32.744 "name": "BaseBdev4", 00:29:32.744 "uuid": "3b519c30-a034-4aa5-8f24-47fd6b280802", 00:29:32.744 "is_configured": true, 00:29:32.744 "data_offset": 2048, 00:29:32.744 "data_size": 63488 00:29:32.744 } 00:29:32.744 ] 00:29:32.744 }' 00:29:32.744 18:28:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:29:32.744 18:28:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:33.312 18:28:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:33.312 18:28:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:33.312 18:28:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:33.312 18:28:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:29:33.312 18:28:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:33.312 18:28:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:29:33.312 18:28:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:29:33.312 18:28:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:33.312 18:28:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:33.312 [2024-12-06 18:28:04.115711] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:29:33.312 BaseBdev1 00:29:33.312 18:28:04 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:33.312 18:28:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:29:33.312 18:28:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:29:33.312 18:28:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:29:33.312 18:28:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:29:33.312 18:28:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:29:33.312 18:28:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:29:33.312 18:28:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:29:33.312 18:28:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:33.312 18:28:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:33.312 18:28:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:33.312 18:28:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:29:33.312 18:28:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:33.312 18:28:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:33.312 [ 00:29:33.312 { 00:29:33.312 "name": "BaseBdev1", 00:29:33.312 "aliases": [ 00:29:33.312 "1d1168c9-5343-4573-8371-7cfc87599b48" 00:29:33.312 ], 00:29:33.312 "product_name": "Malloc disk", 00:29:33.312 "block_size": 512, 00:29:33.312 "num_blocks": 65536, 00:29:33.312 "uuid": "1d1168c9-5343-4573-8371-7cfc87599b48", 00:29:33.312 "assigned_rate_limits": { 00:29:33.312 "rw_ios_per_sec": 0, 00:29:33.312 "rw_mbytes_per_sec": 0, 
00:29:33.312 "r_mbytes_per_sec": 0, 00:29:33.312 "w_mbytes_per_sec": 0 00:29:33.312 }, 00:29:33.312 "claimed": true, 00:29:33.312 "claim_type": "exclusive_write", 00:29:33.312 "zoned": false, 00:29:33.312 "supported_io_types": { 00:29:33.312 "read": true, 00:29:33.312 "write": true, 00:29:33.312 "unmap": true, 00:29:33.312 "flush": true, 00:29:33.312 "reset": true, 00:29:33.312 "nvme_admin": false, 00:29:33.312 "nvme_io": false, 00:29:33.312 "nvme_io_md": false, 00:29:33.312 "write_zeroes": true, 00:29:33.312 "zcopy": true, 00:29:33.312 "get_zone_info": false, 00:29:33.312 "zone_management": false, 00:29:33.312 "zone_append": false, 00:29:33.312 "compare": false, 00:29:33.312 "compare_and_write": false, 00:29:33.312 "abort": true, 00:29:33.312 "seek_hole": false, 00:29:33.312 "seek_data": false, 00:29:33.312 "copy": true, 00:29:33.312 "nvme_iov_md": false 00:29:33.312 }, 00:29:33.312 "memory_domains": [ 00:29:33.312 { 00:29:33.312 "dma_device_id": "system", 00:29:33.312 "dma_device_type": 1 00:29:33.312 }, 00:29:33.312 { 00:29:33.312 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:29:33.312 "dma_device_type": 2 00:29:33.312 } 00:29:33.312 ], 00:29:33.312 "driver_specific": {} 00:29:33.312 } 00:29:33.312 ] 00:29:33.312 18:28:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:33.312 18:28:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:29:33.312 18:28:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:29:33.312 18:28:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:29:33.312 18:28:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:29:33.312 18:28:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:29:33.312 18:28:04 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:29:33.312 18:28:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:29:33.312 18:28:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:29:33.312 18:28:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:29:33.312 18:28:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:29:33.312 18:28:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:29:33.312 18:28:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:29:33.312 18:28:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:33.312 18:28:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:33.313 18:28:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:33.313 18:28:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:33.313 18:28:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:29:33.313 "name": "Existed_Raid", 00:29:33.313 "uuid": "78a392e1-4e62-4bfe-89ba-038096e26421", 00:29:33.313 "strip_size_kb": 64, 00:29:33.313 "state": "configuring", 00:29:33.313 "raid_level": "raid0", 00:29:33.313 "superblock": true, 00:29:33.313 "num_base_bdevs": 4, 00:29:33.313 "num_base_bdevs_discovered": 3, 00:29:33.313 "num_base_bdevs_operational": 4, 00:29:33.313 "base_bdevs_list": [ 00:29:33.313 { 00:29:33.313 "name": "BaseBdev1", 00:29:33.313 "uuid": "1d1168c9-5343-4573-8371-7cfc87599b48", 00:29:33.313 "is_configured": true, 00:29:33.313 "data_offset": 2048, 00:29:33.313 "data_size": 63488 00:29:33.313 }, 00:29:33.313 { 
00:29:33.313 "name": null, 00:29:33.313 "uuid": "b008b089-e9f4-433f-9546-56d1929ec336", 00:29:33.313 "is_configured": false, 00:29:33.313 "data_offset": 0, 00:29:33.313 "data_size": 63488 00:29:33.313 }, 00:29:33.313 { 00:29:33.313 "name": "BaseBdev3", 00:29:33.313 "uuid": "a1a482a7-73ba-4e69-a1ad-52ec9c553349", 00:29:33.313 "is_configured": true, 00:29:33.313 "data_offset": 2048, 00:29:33.313 "data_size": 63488 00:29:33.313 }, 00:29:33.313 { 00:29:33.313 "name": "BaseBdev4", 00:29:33.313 "uuid": "3b519c30-a034-4aa5-8f24-47fd6b280802", 00:29:33.313 "is_configured": true, 00:29:33.313 "data_offset": 2048, 00:29:33.313 "data_size": 63488 00:29:33.313 } 00:29:33.313 ] 00:29:33.313 }' 00:29:33.313 18:28:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:29:33.313 18:28:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:33.880 18:28:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:33.880 18:28:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:29:33.880 18:28:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:33.880 18:28:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:33.880 18:28:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:33.880 18:28:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:29:33.880 18:28:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:29:33.880 18:28:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:33.880 18:28:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:33.880 [2024-12-06 18:28:04.675073] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:29:33.880 18:28:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:33.880 18:28:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:29:33.880 18:28:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:29:33.880 18:28:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:29:33.880 18:28:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:29:33.880 18:28:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:29:33.880 18:28:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:29:33.880 18:28:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:29:33.880 18:28:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:29:33.880 18:28:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:29:33.880 18:28:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:29:33.881 18:28:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:33.881 18:28:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:29:33.881 18:28:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:33.881 18:28:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:33.881 18:28:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:33.881 18:28:04 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:29:33.881 "name": "Existed_Raid", 00:29:33.881 "uuid": "78a392e1-4e62-4bfe-89ba-038096e26421", 00:29:33.881 "strip_size_kb": 64, 00:29:33.881 "state": "configuring", 00:29:33.881 "raid_level": "raid0", 00:29:33.881 "superblock": true, 00:29:33.881 "num_base_bdevs": 4, 00:29:33.881 "num_base_bdevs_discovered": 2, 00:29:33.881 "num_base_bdevs_operational": 4, 00:29:33.881 "base_bdevs_list": [ 00:29:33.881 { 00:29:33.881 "name": "BaseBdev1", 00:29:33.881 "uuid": "1d1168c9-5343-4573-8371-7cfc87599b48", 00:29:33.881 "is_configured": true, 00:29:33.881 "data_offset": 2048, 00:29:33.881 "data_size": 63488 00:29:33.881 }, 00:29:33.881 { 00:29:33.881 "name": null, 00:29:33.881 "uuid": "b008b089-e9f4-433f-9546-56d1929ec336", 00:29:33.881 "is_configured": false, 00:29:33.881 "data_offset": 0, 00:29:33.881 "data_size": 63488 00:29:33.881 }, 00:29:33.881 { 00:29:33.881 "name": null, 00:29:33.881 "uuid": "a1a482a7-73ba-4e69-a1ad-52ec9c553349", 00:29:33.881 "is_configured": false, 00:29:33.881 "data_offset": 0, 00:29:33.881 "data_size": 63488 00:29:33.881 }, 00:29:33.881 { 00:29:33.881 "name": "BaseBdev4", 00:29:33.881 "uuid": "3b519c30-a034-4aa5-8f24-47fd6b280802", 00:29:33.881 "is_configured": true, 00:29:33.881 "data_offset": 2048, 00:29:33.881 "data_size": 63488 00:29:33.881 } 00:29:33.881 ] 00:29:33.881 }' 00:29:33.881 18:28:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:29:33.881 18:28:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:34.449 18:28:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:34.449 18:28:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:29:34.449 18:28:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:34.449 
18:28:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:34.449 18:28:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:34.449 18:28:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:29:34.449 18:28:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:29:34.449 18:28:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:34.449 18:28:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:34.449 [2024-12-06 18:28:05.186357] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:29:34.449 18:28:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:34.449 18:28:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:29:34.449 18:28:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:29:34.449 18:28:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:29:34.449 18:28:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:29:34.449 18:28:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:29:34.449 18:28:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:29:34.449 18:28:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:29:34.449 18:28:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:29:34.449 18:28:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:29:34.449 18:28:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:29:34.449 18:28:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:34.449 18:28:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:29:34.449 18:28:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:34.449 18:28:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:34.449 18:28:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:34.449 18:28:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:29:34.449 "name": "Existed_Raid", 00:29:34.449 "uuid": "78a392e1-4e62-4bfe-89ba-038096e26421", 00:29:34.449 "strip_size_kb": 64, 00:29:34.449 "state": "configuring", 00:29:34.449 "raid_level": "raid0", 00:29:34.449 "superblock": true, 00:29:34.449 "num_base_bdevs": 4, 00:29:34.449 "num_base_bdevs_discovered": 3, 00:29:34.449 "num_base_bdevs_operational": 4, 00:29:34.449 "base_bdevs_list": [ 00:29:34.449 { 00:29:34.449 "name": "BaseBdev1", 00:29:34.449 "uuid": "1d1168c9-5343-4573-8371-7cfc87599b48", 00:29:34.449 "is_configured": true, 00:29:34.449 "data_offset": 2048, 00:29:34.449 "data_size": 63488 00:29:34.449 }, 00:29:34.449 { 00:29:34.449 "name": null, 00:29:34.449 "uuid": "b008b089-e9f4-433f-9546-56d1929ec336", 00:29:34.449 "is_configured": false, 00:29:34.449 "data_offset": 0, 00:29:34.449 "data_size": 63488 00:29:34.449 }, 00:29:34.449 { 00:29:34.449 "name": "BaseBdev3", 00:29:34.449 "uuid": "a1a482a7-73ba-4e69-a1ad-52ec9c553349", 00:29:34.449 "is_configured": true, 00:29:34.449 "data_offset": 2048, 00:29:34.449 "data_size": 63488 00:29:34.449 }, 00:29:34.449 { 00:29:34.449 "name": "BaseBdev4", 00:29:34.449 "uuid": 
"3b519c30-a034-4aa5-8f24-47fd6b280802", 00:29:34.449 "is_configured": true, 00:29:34.449 "data_offset": 2048, 00:29:34.449 "data_size": 63488 00:29:34.449 } 00:29:34.449 ] 00:29:34.449 }' 00:29:34.449 18:28:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:29:34.449 18:28:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:34.708 18:28:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:29:34.708 18:28:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:34.708 18:28:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:34.708 18:28:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:34.967 18:28:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:34.967 18:28:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:29:34.967 18:28:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:29:34.967 18:28:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:34.967 18:28:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:34.967 [2024-12-06 18:28:05.669864] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:29:34.967 18:28:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:34.967 18:28:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:29:34.967 18:28:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:29:34.967 18:28:05 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:29:34.967 18:28:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:29:34.967 18:28:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:29:34.967 18:28:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:29:34.967 18:28:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:29:34.967 18:28:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:29:34.967 18:28:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:29:34.967 18:28:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:29:34.967 18:28:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:34.967 18:28:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:29:34.967 18:28:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:34.967 18:28:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:34.967 18:28:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:34.967 18:28:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:29:34.967 "name": "Existed_Raid", 00:29:34.967 "uuid": "78a392e1-4e62-4bfe-89ba-038096e26421", 00:29:34.967 "strip_size_kb": 64, 00:29:34.967 "state": "configuring", 00:29:34.967 "raid_level": "raid0", 00:29:34.967 "superblock": true, 00:29:34.967 "num_base_bdevs": 4, 00:29:34.967 "num_base_bdevs_discovered": 2, 00:29:34.967 "num_base_bdevs_operational": 4, 00:29:34.967 "base_bdevs_list": [ 00:29:34.967 { 00:29:34.967 "name": null, 00:29:34.967 
"uuid": "1d1168c9-5343-4573-8371-7cfc87599b48", 00:29:34.967 "is_configured": false, 00:29:34.967 "data_offset": 0, 00:29:34.967 "data_size": 63488 00:29:34.967 }, 00:29:34.967 { 00:29:34.967 "name": null, 00:29:34.967 "uuid": "b008b089-e9f4-433f-9546-56d1929ec336", 00:29:34.967 "is_configured": false, 00:29:34.967 "data_offset": 0, 00:29:34.967 "data_size": 63488 00:29:34.967 }, 00:29:34.967 { 00:29:34.967 "name": "BaseBdev3", 00:29:34.967 "uuid": "a1a482a7-73ba-4e69-a1ad-52ec9c553349", 00:29:34.967 "is_configured": true, 00:29:34.967 "data_offset": 2048, 00:29:34.967 "data_size": 63488 00:29:34.967 }, 00:29:34.967 { 00:29:34.967 "name": "BaseBdev4", 00:29:34.967 "uuid": "3b519c30-a034-4aa5-8f24-47fd6b280802", 00:29:34.967 "is_configured": true, 00:29:34.967 "data_offset": 2048, 00:29:34.967 "data_size": 63488 00:29:34.967 } 00:29:34.967 ] 00:29:34.967 }' 00:29:34.967 18:28:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:29:34.967 18:28:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:35.538 18:28:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:35.538 18:28:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:29:35.538 18:28:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:35.538 18:28:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:35.538 18:28:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:35.538 18:28:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:29:35.538 18:28:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:29:35.538 18:28:06 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:29:35.538 18:28:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:35.538 [2024-12-06 18:28:06.257311] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:29:35.538 18:28:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:35.538 18:28:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:29:35.538 18:28:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:29:35.538 18:28:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:29:35.538 18:28:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:29:35.538 18:28:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:29:35.538 18:28:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:29:35.538 18:28:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:29:35.538 18:28:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:29:35.538 18:28:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:29:35.538 18:28:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:29:35.538 18:28:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:29:35.538 18:28:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:35.538 18:28:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:35.538 18:28:06 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:35.538 18:28:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:35.538 18:28:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:29:35.538 "name": "Existed_Raid", 00:29:35.538 "uuid": "78a392e1-4e62-4bfe-89ba-038096e26421", 00:29:35.538 "strip_size_kb": 64, 00:29:35.538 "state": "configuring", 00:29:35.538 "raid_level": "raid0", 00:29:35.538 "superblock": true, 00:29:35.538 "num_base_bdevs": 4, 00:29:35.538 "num_base_bdevs_discovered": 3, 00:29:35.538 "num_base_bdevs_operational": 4, 00:29:35.538 "base_bdevs_list": [ 00:29:35.538 { 00:29:35.538 "name": null, 00:29:35.538 "uuid": "1d1168c9-5343-4573-8371-7cfc87599b48", 00:29:35.538 "is_configured": false, 00:29:35.538 "data_offset": 0, 00:29:35.538 "data_size": 63488 00:29:35.538 }, 00:29:35.538 { 00:29:35.538 "name": "BaseBdev2", 00:29:35.538 "uuid": "b008b089-e9f4-433f-9546-56d1929ec336", 00:29:35.538 "is_configured": true, 00:29:35.538 "data_offset": 2048, 00:29:35.538 "data_size": 63488 00:29:35.538 }, 00:29:35.538 { 00:29:35.538 "name": "BaseBdev3", 00:29:35.538 "uuid": "a1a482a7-73ba-4e69-a1ad-52ec9c553349", 00:29:35.538 "is_configured": true, 00:29:35.538 "data_offset": 2048, 00:29:35.539 "data_size": 63488 00:29:35.539 }, 00:29:35.539 { 00:29:35.539 "name": "BaseBdev4", 00:29:35.539 "uuid": "3b519c30-a034-4aa5-8f24-47fd6b280802", 00:29:35.539 "is_configured": true, 00:29:35.539 "data_offset": 2048, 00:29:35.539 "data_size": 63488 00:29:35.539 } 00:29:35.539 ] 00:29:35.539 }' 00:29:35.539 18:28:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:29:35.539 18:28:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:35.798 18:28:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:35.798 18:28:06 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:29:35.798 18:28:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:35.798 18:28:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:35.798 18:28:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:35.798 18:28:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:29:35.798 18:28:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:35.798 18:28:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:29:35.798 18:28:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:35.798 18:28:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:35.798 18:28:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:36.058 18:28:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 1d1168c9-5343-4573-8371-7cfc87599b48 00:29:36.058 18:28:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:36.058 18:28:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:36.058 [2024-12-06 18:28:06.794639] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:29:36.058 [2024-12-06 18:28:06.794861] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:29:36.058 [2024-12-06 18:28:06.794876] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:29:36.058 [2024-12-06 18:28:06.795180] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d0000063c0 00:29:36.058 NewBaseBdev 00:29:36.058 [2024-12-06 18:28:06.795313] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:29:36.058 [2024-12-06 18:28:06.795326] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:29:36.058 [2024-12-06 18:28:06.795467] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:29:36.058 18:28:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:36.058 18:28:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:29:36.058 18:28:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:29:36.058 18:28:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:29:36.058 18:28:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:29:36.058 18:28:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:29:36.058 18:28:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:29:36.058 18:28:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:29:36.058 18:28:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:36.058 18:28:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:36.058 18:28:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:36.058 18:28:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:29:36.058 18:28:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:36.058 18:28:06 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:36.058 [ 00:29:36.058 { 00:29:36.058 "name": "NewBaseBdev", 00:29:36.058 "aliases": [ 00:29:36.058 "1d1168c9-5343-4573-8371-7cfc87599b48" 00:29:36.058 ], 00:29:36.058 "product_name": "Malloc disk", 00:29:36.058 "block_size": 512, 00:29:36.058 "num_blocks": 65536, 00:29:36.058 "uuid": "1d1168c9-5343-4573-8371-7cfc87599b48", 00:29:36.058 "assigned_rate_limits": { 00:29:36.058 "rw_ios_per_sec": 0, 00:29:36.058 "rw_mbytes_per_sec": 0, 00:29:36.058 "r_mbytes_per_sec": 0, 00:29:36.058 "w_mbytes_per_sec": 0 00:29:36.058 }, 00:29:36.058 "claimed": true, 00:29:36.058 "claim_type": "exclusive_write", 00:29:36.058 "zoned": false, 00:29:36.058 "supported_io_types": { 00:29:36.058 "read": true, 00:29:36.058 "write": true, 00:29:36.058 "unmap": true, 00:29:36.058 "flush": true, 00:29:36.058 "reset": true, 00:29:36.058 "nvme_admin": false, 00:29:36.058 "nvme_io": false, 00:29:36.058 "nvme_io_md": false, 00:29:36.058 "write_zeroes": true, 00:29:36.058 "zcopy": true, 00:29:36.058 "get_zone_info": false, 00:29:36.058 "zone_management": false, 00:29:36.058 "zone_append": false, 00:29:36.058 "compare": false, 00:29:36.058 "compare_and_write": false, 00:29:36.058 "abort": true, 00:29:36.058 "seek_hole": false, 00:29:36.058 "seek_data": false, 00:29:36.058 "copy": true, 00:29:36.058 "nvme_iov_md": false 00:29:36.058 }, 00:29:36.058 "memory_domains": [ 00:29:36.058 { 00:29:36.058 "dma_device_id": "system", 00:29:36.058 "dma_device_type": 1 00:29:36.058 }, 00:29:36.058 { 00:29:36.058 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:29:36.058 "dma_device_type": 2 00:29:36.058 } 00:29:36.058 ], 00:29:36.058 "driver_specific": {} 00:29:36.058 } 00:29:36.058 ] 00:29:36.058 18:28:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:36.058 18:28:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:29:36.058 18:28:06 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:29:36.058 18:28:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:29:36.058 18:28:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:29:36.058 18:28:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:29:36.058 18:28:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:29:36.058 18:28:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:29:36.058 18:28:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:29:36.058 18:28:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:29:36.058 18:28:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:29:36.058 18:28:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:29:36.058 18:28:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:36.058 18:28:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:36.058 18:28:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:36.058 18:28:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:29:36.058 18:28:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:36.058 18:28:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:29:36.058 "name": "Existed_Raid", 00:29:36.058 "uuid": "78a392e1-4e62-4bfe-89ba-038096e26421", 00:29:36.058 "strip_size_kb": 64, 00:29:36.058 
"state": "online", 00:29:36.058 "raid_level": "raid0", 00:29:36.058 "superblock": true, 00:29:36.058 "num_base_bdevs": 4, 00:29:36.058 "num_base_bdevs_discovered": 4, 00:29:36.058 "num_base_bdevs_operational": 4, 00:29:36.058 "base_bdevs_list": [ 00:29:36.058 { 00:29:36.058 "name": "NewBaseBdev", 00:29:36.058 "uuid": "1d1168c9-5343-4573-8371-7cfc87599b48", 00:29:36.058 "is_configured": true, 00:29:36.058 "data_offset": 2048, 00:29:36.058 "data_size": 63488 00:29:36.058 }, 00:29:36.058 { 00:29:36.058 "name": "BaseBdev2", 00:29:36.058 "uuid": "b008b089-e9f4-433f-9546-56d1929ec336", 00:29:36.058 "is_configured": true, 00:29:36.058 "data_offset": 2048, 00:29:36.058 "data_size": 63488 00:29:36.058 }, 00:29:36.058 { 00:29:36.058 "name": "BaseBdev3", 00:29:36.058 "uuid": "a1a482a7-73ba-4e69-a1ad-52ec9c553349", 00:29:36.058 "is_configured": true, 00:29:36.058 "data_offset": 2048, 00:29:36.058 "data_size": 63488 00:29:36.058 }, 00:29:36.058 { 00:29:36.059 "name": "BaseBdev4", 00:29:36.059 "uuid": "3b519c30-a034-4aa5-8f24-47fd6b280802", 00:29:36.059 "is_configured": true, 00:29:36.059 "data_offset": 2048, 00:29:36.059 "data_size": 63488 00:29:36.059 } 00:29:36.059 ] 00:29:36.059 }' 00:29:36.059 18:28:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:29:36.059 18:28:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:36.628 18:28:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:29:36.628 18:28:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:29:36.628 18:28:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:29:36.628 18:28:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:29:36.628 18:28:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:29:36.628 
18:28:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:29:36.628 18:28:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:29:36.628 18:28:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:36.628 18:28:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:36.629 18:28:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:29:36.629 [2024-12-06 18:28:07.294391] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:29:36.629 18:28:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:36.629 18:28:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:29:36.629 "name": "Existed_Raid", 00:29:36.629 "aliases": [ 00:29:36.629 "78a392e1-4e62-4bfe-89ba-038096e26421" 00:29:36.629 ], 00:29:36.629 "product_name": "Raid Volume", 00:29:36.629 "block_size": 512, 00:29:36.629 "num_blocks": 253952, 00:29:36.629 "uuid": "78a392e1-4e62-4bfe-89ba-038096e26421", 00:29:36.629 "assigned_rate_limits": { 00:29:36.629 "rw_ios_per_sec": 0, 00:29:36.629 "rw_mbytes_per_sec": 0, 00:29:36.629 "r_mbytes_per_sec": 0, 00:29:36.629 "w_mbytes_per_sec": 0 00:29:36.629 }, 00:29:36.629 "claimed": false, 00:29:36.629 "zoned": false, 00:29:36.629 "supported_io_types": { 00:29:36.629 "read": true, 00:29:36.629 "write": true, 00:29:36.629 "unmap": true, 00:29:36.629 "flush": true, 00:29:36.629 "reset": true, 00:29:36.629 "nvme_admin": false, 00:29:36.629 "nvme_io": false, 00:29:36.629 "nvme_io_md": false, 00:29:36.629 "write_zeroes": true, 00:29:36.629 "zcopy": false, 00:29:36.629 "get_zone_info": false, 00:29:36.629 "zone_management": false, 00:29:36.629 "zone_append": false, 00:29:36.629 "compare": false, 00:29:36.629 "compare_and_write": false, 00:29:36.629 "abort": 
false, 00:29:36.629 "seek_hole": false, 00:29:36.629 "seek_data": false, 00:29:36.629 "copy": false, 00:29:36.629 "nvme_iov_md": false 00:29:36.629 }, 00:29:36.629 "memory_domains": [ 00:29:36.629 { 00:29:36.629 "dma_device_id": "system", 00:29:36.629 "dma_device_type": 1 00:29:36.629 }, 00:29:36.629 { 00:29:36.629 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:29:36.629 "dma_device_type": 2 00:29:36.629 }, 00:29:36.629 { 00:29:36.629 "dma_device_id": "system", 00:29:36.629 "dma_device_type": 1 00:29:36.629 }, 00:29:36.629 { 00:29:36.629 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:29:36.629 "dma_device_type": 2 00:29:36.629 }, 00:29:36.629 { 00:29:36.629 "dma_device_id": "system", 00:29:36.629 "dma_device_type": 1 00:29:36.629 }, 00:29:36.629 { 00:29:36.629 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:29:36.629 "dma_device_type": 2 00:29:36.629 }, 00:29:36.629 { 00:29:36.629 "dma_device_id": "system", 00:29:36.629 "dma_device_type": 1 00:29:36.629 }, 00:29:36.629 { 00:29:36.629 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:29:36.629 "dma_device_type": 2 00:29:36.629 } 00:29:36.629 ], 00:29:36.629 "driver_specific": { 00:29:36.629 "raid": { 00:29:36.629 "uuid": "78a392e1-4e62-4bfe-89ba-038096e26421", 00:29:36.629 "strip_size_kb": 64, 00:29:36.629 "state": "online", 00:29:36.629 "raid_level": "raid0", 00:29:36.629 "superblock": true, 00:29:36.629 "num_base_bdevs": 4, 00:29:36.629 "num_base_bdevs_discovered": 4, 00:29:36.629 "num_base_bdevs_operational": 4, 00:29:36.629 "base_bdevs_list": [ 00:29:36.629 { 00:29:36.629 "name": "NewBaseBdev", 00:29:36.629 "uuid": "1d1168c9-5343-4573-8371-7cfc87599b48", 00:29:36.629 "is_configured": true, 00:29:36.629 "data_offset": 2048, 00:29:36.629 "data_size": 63488 00:29:36.629 }, 00:29:36.629 { 00:29:36.629 "name": "BaseBdev2", 00:29:36.629 "uuid": "b008b089-e9f4-433f-9546-56d1929ec336", 00:29:36.629 "is_configured": true, 00:29:36.629 "data_offset": 2048, 00:29:36.629 "data_size": 63488 00:29:36.629 }, 00:29:36.629 { 00:29:36.629 
"name": "BaseBdev3", 00:29:36.629 "uuid": "a1a482a7-73ba-4e69-a1ad-52ec9c553349", 00:29:36.629 "is_configured": true, 00:29:36.629 "data_offset": 2048, 00:29:36.629 "data_size": 63488 00:29:36.629 }, 00:29:36.629 { 00:29:36.629 "name": "BaseBdev4", 00:29:36.629 "uuid": "3b519c30-a034-4aa5-8f24-47fd6b280802", 00:29:36.629 "is_configured": true, 00:29:36.629 "data_offset": 2048, 00:29:36.629 "data_size": 63488 00:29:36.629 } 00:29:36.629 ] 00:29:36.629 } 00:29:36.629 } 00:29:36.629 }' 00:29:36.629 18:28:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:29:36.629 18:28:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:29:36.629 BaseBdev2 00:29:36.629 BaseBdev3 00:29:36.629 BaseBdev4' 00:29:36.629 18:28:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:29:36.629 18:28:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:29:36.629 18:28:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:29:36.629 18:28:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:29:36.629 18:28:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:36.629 18:28:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:36.629 18:28:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:29:36.629 18:28:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:36.629 18:28:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:29:36.629 18:28:07 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:29:36.629 18:28:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:29:36.629 18:28:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:29:36.629 18:28:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:36.629 18:28:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:36.629 18:28:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:29:36.629 18:28:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:36.629 18:28:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:29:36.629 18:28:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:29:36.629 18:28:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:29:36.629 18:28:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:29:36.629 18:28:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:29:36.629 18:28:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:36.629 18:28:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:36.629 18:28:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:36.629 18:28:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:29:36.629 18:28:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # 
[[ 512 == \5\1\2\ \ \ ]] 00:29:36.629 18:28:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:29:36.629 18:28:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:29:36.629 18:28:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:29:36.629 18:28:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:36.629 18:28:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:36.889 18:28:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:36.889 18:28:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:29:36.889 18:28:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:29:36.889 18:28:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:29:36.889 18:28:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:36.889 18:28:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:36.889 [2024-12-06 18:28:07.613772] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:29:36.889 [2024-12-06 18:28:07.613950] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:29:36.889 [2024-12-06 18:28:07.614044] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:29:36.889 [2024-12-06 18:28:07.614116] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:29:36.889 [2024-12-06 18:28:07.614128] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, 
state offline 00:29:36.889 18:28:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:36.889 18:28:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 69775 00:29:36.889 18:28:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 69775 ']' 00:29:36.889 18:28:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 69775 00:29:36.889 18:28:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:29:36.889 18:28:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:36.889 18:28:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69775 00:29:36.889 killing process with pid 69775 00:29:36.889 18:28:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:36.889 18:28:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:36.889 18:28:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69775' 00:29:36.889 18:28:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 69775 00:29:36.889 [2024-12-06 18:28:07.670211] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:29:36.889 18:28:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 69775 00:29:37.150 [2024-12-06 18:28:08.071463] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:29:38.530 18:28:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:29:38.530 00:29:38.530 real 0m11.629s 00:29:38.530 user 0m18.335s 00:29:38.530 sys 0m2.494s 00:29:38.530 18:28:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:38.530 
************************************ 00:29:38.530 END TEST raid_state_function_test_sb 00:29:38.530 ************************************ 00:29:38.530 18:28:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:29:38.530 18:28:09 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid0 4 00:29:38.530 18:28:09 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:29:38.530 18:28:09 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:38.530 18:28:09 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:29:38.530 ************************************ 00:29:38.530 START TEST raid_superblock_test 00:29:38.530 ************************************ 00:29:38.530 18:28:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid0 4 00:29:38.530 18:28:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0 00:29:38.530 18:28:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:29:38.530 18:28:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:29:38.530 18:28:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:29:38.530 18:28:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:29:38.530 18:28:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:29:38.530 18:28:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:29:38.530 18:28:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:29:38.530 18:28:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:29:38.530 18:28:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:29:38.530 18:28:09 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:29:38.530 18:28:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:29:38.530 18:28:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:29:38.530 18:28:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 00:29:38.530 18:28:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:29:38.530 18:28:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:29:38.530 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:38.530 18:28:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=70445 00:29:38.530 18:28:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 70445 00:29:38.530 18:28:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:29:38.530 18:28:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 70445 ']' 00:29:38.530 18:28:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:38.530 18:28:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:38.530 18:28:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:38.530 18:28:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:38.530 18:28:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:38.530 [2024-12-06 18:28:09.412755] Starting SPDK v25.01-pre git sha1 b6a18b192 / DPDK 24.03.0 initialization... 
00:29:38.530 [2024-12-06 18:28:09.412909] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70445 ] 00:29:38.790 [2024-12-06 18:28:09.600113] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:38.791 [2024-12-06 18:28:09.720796] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:39.050 [2024-12-06 18:28:09.955011] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:29:39.050 [2024-12-06 18:28:09.955080] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:29:39.620 18:28:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:39.620 18:28:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:29:39.620 18:28:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:29:39.620 18:28:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:29:39.621 18:28:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:29:39.621 18:28:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:29:39.621 18:28:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:29:39.621 18:28:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:29:39.621 18:28:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:29:39.621 18:28:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:29:39.621 18:28:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:29:39.621 
18:28:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:39.621 18:28:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:39.621 malloc1 00:29:39.621 18:28:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:39.621 18:28:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:29:39.621 18:28:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:39.621 18:28:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:39.621 [2024-12-06 18:28:10.353746] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:29:39.621 [2024-12-06 18:28:10.354106] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:39.621 [2024-12-06 18:28:10.354206] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:29:39.621 [2024-12-06 18:28:10.354485] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:39.621 [2024-12-06 18:28:10.357912] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:39.621 [2024-12-06 18:28:10.358115] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:29:39.621 pt1 00:29:39.621 18:28:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:39.621 18:28:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:29:39.621 18:28:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:29:39.621 18:28:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:29:39.621 18:28:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:29:39.621 18:28:10 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:29:39.621 18:28:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:29:39.621 18:28:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:29:39.621 18:28:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:29:39.621 18:28:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:29:39.621 18:28:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:39.621 18:28:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:39.621 malloc2 00:29:39.621 18:28:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:39.621 18:28:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:29:39.621 18:28:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:39.621 18:28:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:39.621 [2024-12-06 18:28:10.421555] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:29:39.621 [2024-12-06 18:28:10.421893] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:39.621 [2024-12-06 18:28:10.421950] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:29:39.621 [2024-12-06 18:28:10.421964] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:39.621 [2024-12-06 18:28:10.425085] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:39.621 [2024-12-06 18:28:10.425296] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:29:39.621 
pt2 00:29:39.621 18:28:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:39.621 18:28:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:29:39.621 18:28:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:29:39.621 18:28:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:29:39.621 18:28:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:29:39.621 18:28:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:29:39.621 18:28:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:29:39.621 18:28:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:29:39.621 18:28:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:29:39.621 18:28:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:29:39.621 18:28:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:39.621 18:28:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:39.621 malloc3 00:29:39.621 18:28:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:39.621 18:28:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:29:39.621 18:28:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:39.621 18:28:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:39.621 [2024-12-06 18:28:10.504308] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:29:39.621 [2024-12-06 18:28:10.504604] 
vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:39.621 [2024-12-06 18:28:10.504696] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:29:39.621 [2024-12-06 18:28:10.504960] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:39.621 [2024-12-06 18:28:10.508325] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:39.621 [2024-12-06 18:28:10.508529] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:29:39.621 pt3 00:29:39.621 18:28:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:39.621 18:28:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:29:39.621 18:28:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:29:39.621 18:28:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:29:39.621 18:28:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:29:39.621 18:28:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:29:39.621 18:28:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:29:39.621 18:28:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:29:39.621 18:28:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:29:39.621 18:28:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:29:39.621 18:28:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:39.621 18:28:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:39.621 malloc4 00:29:39.621 18:28:10 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:39.621 18:28:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:29:39.621 18:28:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:39.621 18:28:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:39.882 [2024-12-06 18:28:10.571775] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:29:39.882 [2024-12-06 18:28:10.571890] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:39.882 [2024-12-06 18:28:10.571924] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:29:39.882 [2024-12-06 18:28:10.571937] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:39.882 [2024-12-06 18:28:10.575093] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:39.882 [2024-12-06 18:28:10.575387] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:29:39.882 pt4 00:29:39.882 18:28:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:39.882 18:28:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:29:39.882 18:28:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:29:39.882 18:28:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:29:39.882 18:28:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:39.882 18:28:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:39.882 [2024-12-06 18:28:10.583886] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:29:39.882 [2024-12-06 
18:28:10.586681] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:29:39.882 [2024-12-06 18:28:10.587029] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:29:39.882 [2024-12-06 18:28:10.587094] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:29:39.882 [2024-12-06 18:28:10.587354] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:29:39.882 [2024-12-06 18:28:10.587370] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:29:39.882 [2024-12-06 18:28:10.587740] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:29:39.882 [2024-12-06 18:28:10.587952] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:29:39.882 [2024-12-06 18:28:10.587968] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:29:39.882 [2024-12-06 18:28:10.588272] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:29:39.882 18:28:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:39.882 18:28:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:29:39.882 18:28:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:29:39.882 18:28:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:29:39.882 18:28:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:29:39.882 18:28:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:29:39.882 18:28:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:29:39.882 18:28:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:29:39.882 18:28:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:29:39.882 18:28:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:29:39.882 18:28:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:29:39.882 18:28:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:39.882 18:28:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:39.882 18:28:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:39.882 18:28:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:39.882 18:28:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:39.882 18:28:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:29:39.882 "name": "raid_bdev1", 00:29:39.882 "uuid": "dfea0151-5351-4367-8565-a57ea58f0131", 00:29:39.882 "strip_size_kb": 64, 00:29:39.882 "state": "online", 00:29:39.882 "raid_level": "raid0", 00:29:39.882 "superblock": true, 00:29:39.882 "num_base_bdevs": 4, 00:29:39.882 "num_base_bdevs_discovered": 4, 00:29:39.882 "num_base_bdevs_operational": 4, 00:29:39.882 "base_bdevs_list": [ 00:29:39.882 { 00:29:39.882 "name": "pt1", 00:29:39.882 "uuid": "00000000-0000-0000-0000-000000000001", 00:29:39.882 "is_configured": true, 00:29:39.882 "data_offset": 2048, 00:29:39.882 "data_size": 63488 00:29:39.882 }, 00:29:39.882 { 00:29:39.882 "name": "pt2", 00:29:39.882 "uuid": "00000000-0000-0000-0000-000000000002", 00:29:39.882 "is_configured": true, 00:29:39.882 "data_offset": 2048, 00:29:39.882 "data_size": 63488 00:29:39.882 }, 00:29:39.882 { 00:29:39.882 "name": "pt3", 00:29:39.882 "uuid": "00000000-0000-0000-0000-000000000003", 00:29:39.882 "is_configured": true, 00:29:39.882 "data_offset": 2048, 00:29:39.882 
"data_size": 63488 00:29:39.882 }, 00:29:39.882 { 00:29:39.882 "name": "pt4", 00:29:39.882 "uuid": "00000000-0000-0000-0000-000000000004", 00:29:39.882 "is_configured": true, 00:29:39.882 "data_offset": 2048, 00:29:39.882 "data_size": 63488 00:29:39.882 } 00:29:39.882 ] 00:29:39.882 }' 00:29:39.882 18:28:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:29:39.882 18:28:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:40.142 18:28:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:29:40.143 18:28:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:29:40.143 18:28:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:29:40.143 18:28:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:29:40.143 18:28:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:29:40.143 18:28:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:29:40.143 18:28:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:29:40.143 18:28:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:29:40.143 18:28:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:40.143 18:28:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:40.143 [2024-12-06 18:28:11.039971] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:29:40.143 18:28:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:40.143 18:28:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:29:40.143 "name": "raid_bdev1", 00:29:40.143 "aliases": [ 00:29:40.143 "dfea0151-5351-4367-8565-a57ea58f0131" 
00:29:40.143 ], 00:29:40.143 "product_name": "Raid Volume", 00:29:40.143 "block_size": 512, 00:29:40.143 "num_blocks": 253952, 00:29:40.143 "uuid": "dfea0151-5351-4367-8565-a57ea58f0131", 00:29:40.143 "assigned_rate_limits": { 00:29:40.143 "rw_ios_per_sec": 0, 00:29:40.143 "rw_mbytes_per_sec": 0, 00:29:40.143 "r_mbytes_per_sec": 0, 00:29:40.143 "w_mbytes_per_sec": 0 00:29:40.143 }, 00:29:40.143 "claimed": false, 00:29:40.143 "zoned": false, 00:29:40.143 "supported_io_types": { 00:29:40.143 "read": true, 00:29:40.143 "write": true, 00:29:40.143 "unmap": true, 00:29:40.143 "flush": true, 00:29:40.143 "reset": true, 00:29:40.143 "nvme_admin": false, 00:29:40.143 "nvme_io": false, 00:29:40.143 "nvme_io_md": false, 00:29:40.143 "write_zeroes": true, 00:29:40.143 "zcopy": false, 00:29:40.143 "get_zone_info": false, 00:29:40.143 "zone_management": false, 00:29:40.143 "zone_append": false, 00:29:40.143 "compare": false, 00:29:40.143 "compare_and_write": false, 00:29:40.143 "abort": false, 00:29:40.143 "seek_hole": false, 00:29:40.143 "seek_data": false, 00:29:40.143 "copy": false, 00:29:40.143 "nvme_iov_md": false 00:29:40.143 }, 00:29:40.143 "memory_domains": [ 00:29:40.143 { 00:29:40.143 "dma_device_id": "system", 00:29:40.143 "dma_device_type": 1 00:29:40.143 }, 00:29:40.143 { 00:29:40.143 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:29:40.143 "dma_device_type": 2 00:29:40.143 }, 00:29:40.143 { 00:29:40.143 "dma_device_id": "system", 00:29:40.143 "dma_device_type": 1 00:29:40.143 }, 00:29:40.143 { 00:29:40.143 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:29:40.143 "dma_device_type": 2 00:29:40.143 }, 00:29:40.143 { 00:29:40.143 "dma_device_id": "system", 00:29:40.143 "dma_device_type": 1 00:29:40.143 }, 00:29:40.143 { 00:29:40.143 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:29:40.143 "dma_device_type": 2 00:29:40.143 }, 00:29:40.143 { 00:29:40.143 "dma_device_id": "system", 00:29:40.143 "dma_device_type": 1 00:29:40.143 }, 00:29:40.143 { 00:29:40.143 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:29:40.143 "dma_device_type": 2 00:29:40.143 } 00:29:40.143 ], 00:29:40.143 "driver_specific": { 00:29:40.143 "raid": { 00:29:40.143 "uuid": "dfea0151-5351-4367-8565-a57ea58f0131", 00:29:40.143 "strip_size_kb": 64, 00:29:40.143 "state": "online", 00:29:40.143 "raid_level": "raid0", 00:29:40.143 "superblock": true, 00:29:40.143 "num_base_bdevs": 4, 00:29:40.143 "num_base_bdevs_discovered": 4, 00:29:40.143 "num_base_bdevs_operational": 4, 00:29:40.143 "base_bdevs_list": [ 00:29:40.143 { 00:29:40.143 "name": "pt1", 00:29:40.143 "uuid": "00000000-0000-0000-0000-000000000001", 00:29:40.143 "is_configured": true, 00:29:40.143 "data_offset": 2048, 00:29:40.143 "data_size": 63488 00:29:40.143 }, 00:29:40.143 { 00:29:40.143 "name": "pt2", 00:29:40.143 "uuid": "00000000-0000-0000-0000-000000000002", 00:29:40.143 "is_configured": true, 00:29:40.143 "data_offset": 2048, 00:29:40.143 "data_size": 63488 00:29:40.143 }, 00:29:40.143 { 00:29:40.143 "name": "pt3", 00:29:40.143 "uuid": "00000000-0000-0000-0000-000000000003", 00:29:40.143 "is_configured": true, 00:29:40.143 "data_offset": 2048, 00:29:40.143 "data_size": 63488 00:29:40.143 }, 00:29:40.143 { 00:29:40.143 "name": "pt4", 00:29:40.143 "uuid": "00000000-0000-0000-0000-000000000004", 00:29:40.143 "is_configured": true, 00:29:40.143 "data_offset": 2048, 00:29:40.143 "data_size": 63488 00:29:40.143 } 00:29:40.143 ] 00:29:40.143 } 00:29:40.143 } 00:29:40.143 }' 00:29:40.143 18:28:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:29:40.404 18:28:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:29:40.404 pt2 00:29:40.404 pt3 00:29:40.404 pt4' 00:29:40.404 18:28:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:29:40.404 18:28:11 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:29:40.404 18:28:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:29:40.404 18:28:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:29:40.404 18:28:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:29:40.404 18:28:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:40.404 18:28:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:40.404 18:28:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:40.404 18:28:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:29:40.404 18:28:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:29:40.404 18:28:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:29:40.404 18:28:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:29:40.404 18:28:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:40.404 18:28:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:40.404 18:28:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:29:40.404 18:28:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:40.404 18:28:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:29:40.404 18:28:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:29:40.404 18:28:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:29:40.404 18:28:11 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:29:40.404 18:28:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:40.404 18:28:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:40.404 18:28:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:29:40.404 18:28:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:40.404 18:28:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:29:40.404 18:28:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:29:40.404 18:28:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:29:40.404 18:28:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:29:40.404 18:28:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:40.404 18:28:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:29:40.404 18:28:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:40.404 18:28:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:40.404 18:28:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:29:40.404 18:28:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:29:40.404 18:28:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:29:40.404 18:28:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:29:40.404 18:28:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:29:40.404 18:28:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:40.404 [2024-12-06 18:28:11.347622] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:29:40.665 18:28:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:40.665 18:28:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=dfea0151-5351-4367-8565-a57ea58f0131 00:29:40.665 18:28:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z dfea0151-5351-4367-8565-a57ea58f0131 ']' 00:29:40.665 18:28:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:29:40.665 18:28:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:40.665 18:28:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:40.665 [2024-12-06 18:28:11.391204] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:29:40.665 [2024-12-06 18:28:11.391261] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:29:40.665 [2024-12-06 18:28:11.391395] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:29:40.665 [2024-12-06 18:28:11.391485] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:29:40.665 [2024-12-06 18:28:11.391508] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:29:40.665 18:28:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:40.665 18:28:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:40.665 18:28:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:29:40.665 18:28:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 
-- # xtrace_disable 00:29:40.665 18:28:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:40.665 18:28:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:40.665 18:28:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:29:40.665 18:28:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:29:40.665 18:28:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:29:40.665 18:28:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:29:40.665 18:28:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:40.665 18:28:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:40.665 18:28:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:40.665 18:28:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:29:40.665 18:28:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:29:40.665 18:28:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:40.665 18:28:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:40.665 18:28:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:40.665 18:28:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:29:40.665 18:28:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:29:40.665 18:28:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:40.665 18:28:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:40.665 18:28:11 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:40.665 18:28:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:29:40.665 18:28:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:29:40.665 18:28:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:40.665 18:28:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:40.665 18:28:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:40.665 18:28:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:29:40.665 18:28:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:29:40.665 18:28:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:40.665 18:28:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:40.665 18:28:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:40.665 18:28:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:29:40.665 18:28:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:29:40.665 18:28:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:29:40.665 18:28:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:29:40.665 18:28:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:29:40.665 18:28:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:40.665 18:28:11 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:29:40.665 18:28:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:40.665 18:28:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:29:40.665 18:28:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:40.665 18:28:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:40.665 [2024-12-06 18:28:11.550977] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:29:40.665 [2024-12-06 18:28:11.553740] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:29:40.665 [2024-12-06 18:28:11.554021] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:29:40.665 [2024-12-06 18:28:11.554078] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:29:40.665 [2024-12-06 18:28:11.554173] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:29:40.666 [2024-12-06 18:28:11.554243] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:29:40.666 [2024-12-06 18:28:11.554268] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:29:40.666 [2024-12-06 18:28:11.554293] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:29:40.666 [2024-12-06 18:28:11.554312] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:29:40.666 [2024-12-06 18:28:11.554332] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000007b00 name raid_bdev1, state configuring 00:29:40.666 request: 00:29:40.666 { 00:29:40.666 "name": "raid_bdev1", 00:29:40.666 "raid_level": "raid0", 00:29:40.666 "base_bdevs": [ 00:29:40.666 "malloc1", 00:29:40.666 "malloc2", 00:29:40.666 "malloc3", 00:29:40.666 "malloc4" 00:29:40.666 ], 00:29:40.666 "strip_size_kb": 64, 00:29:40.666 "superblock": false, 00:29:40.666 "method": "bdev_raid_create", 00:29:40.666 "req_id": 1 00:29:40.666 } 00:29:40.666 Got JSON-RPC error response 00:29:40.666 response: 00:29:40.666 { 00:29:40.666 "code": -17, 00:29:40.666 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:29:40.666 } 00:29:40.666 18:28:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:29:40.666 18:28:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:29:40.666 18:28:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:29:40.666 18:28:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:29:40.666 18:28:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:29:40.666 18:28:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:40.666 18:28:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:29:40.666 18:28:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:40.666 18:28:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:40.666 18:28:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:40.666 18:28:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:29:40.666 18:28:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:29:40.666 18:28:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 
-u 00000000-0000-0000-0000-000000000001 00:29:40.666 18:28:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:40.666 18:28:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:40.666 [2024-12-06 18:28:11.610883] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:29:40.666 [2024-12-06 18:28:11.611004] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:40.666 [2024-12-06 18:28:11.611039] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:29:40.666 [2024-12-06 18:28:11.611056] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:40.926 [2024-12-06 18:28:11.614317] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:40.926 [2024-12-06 18:28:11.614387] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:29:40.926 [2024-12-06 18:28:11.614526] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:29:40.926 [2024-12-06 18:28:11.614605] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:29:40.926 pt1 00:29:40.926 18:28:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:40.926 18:28:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4 00:29:40.926 18:28:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:29:40.926 18:28:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:29:40.926 18:28:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:29:40.926 18:28:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:29:40.926 18:28:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=4 00:29:40.926 18:28:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:29:40.927 18:28:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:29:40.927 18:28:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:29:40.927 18:28:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:29:40.927 18:28:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:40.927 18:28:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:40.927 18:28:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:40.927 18:28:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:40.927 18:28:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:40.927 18:28:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:29:40.927 "name": "raid_bdev1", 00:29:40.927 "uuid": "dfea0151-5351-4367-8565-a57ea58f0131", 00:29:40.927 "strip_size_kb": 64, 00:29:40.927 "state": "configuring", 00:29:40.927 "raid_level": "raid0", 00:29:40.927 "superblock": true, 00:29:40.927 "num_base_bdevs": 4, 00:29:40.927 "num_base_bdevs_discovered": 1, 00:29:40.927 "num_base_bdevs_operational": 4, 00:29:40.927 "base_bdevs_list": [ 00:29:40.927 { 00:29:40.927 "name": "pt1", 00:29:40.927 "uuid": "00000000-0000-0000-0000-000000000001", 00:29:40.927 "is_configured": true, 00:29:40.927 "data_offset": 2048, 00:29:40.927 "data_size": 63488 00:29:40.927 }, 00:29:40.927 { 00:29:40.927 "name": null, 00:29:40.927 "uuid": "00000000-0000-0000-0000-000000000002", 00:29:40.927 "is_configured": false, 00:29:40.927 "data_offset": 2048, 00:29:40.927 "data_size": 63488 00:29:40.927 }, 00:29:40.927 { 00:29:40.927 "name": null, 00:29:40.927 
"uuid": "00000000-0000-0000-0000-000000000003", 00:29:40.927 "is_configured": false, 00:29:40.927 "data_offset": 2048, 00:29:40.927 "data_size": 63488 00:29:40.927 }, 00:29:40.927 { 00:29:40.927 "name": null, 00:29:40.927 "uuid": "00000000-0000-0000-0000-000000000004", 00:29:40.927 "is_configured": false, 00:29:40.927 "data_offset": 2048, 00:29:40.927 "data_size": 63488 00:29:40.927 } 00:29:40.927 ] 00:29:40.927 }' 00:29:40.927 18:28:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:29:40.927 18:28:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:41.188 18:28:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:29:41.188 18:28:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:29:41.188 18:28:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:41.188 18:28:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:41.188 [2024-12-06 18:28:12.058349] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:29:41.188 [2024-12-06 18:28:12.058474] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:41.188 [2024-12-06 18:28:12.058503] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:29:41.188 [2024-12-06 18:28:12.058521] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:41.188 [2024-12-06 18:28:12.059133] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:41.188 [2024-12-06 18:28:12.059177] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:29:41.188 [2024-12-06 18:28:12.059293] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:29:41.188 [2024-12-06 18:28:12.059331] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:29:41.188 pt2 00:29:41.188 18:28:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:41.188 18:28:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:29:41.188 18:28:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:41.188 18:28:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:41.188 [2024-12-06 18:28:12.066367] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:29:41.188 18:28:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:41.188 18:28:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4 00:29:41.188 18:28:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:29:41.188 18:28:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:29:41.188 18:28:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:29:41.188 18:28:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:29:41.188 18:28:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:29:41.188 18:28:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:29:41.188 18:28:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:29:41.188 18:28:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:29:41.188 18:28:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:29:41.188 18:28:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:41.188 18:28:12 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:41.188 18:28:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:41.188 18:28:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:41.188 18:28:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:41.188 18:28:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:29:41.188 "name": "raid_bdev1", 00:29:41.188 "uuid": "dfea0151-5351-4367-8565-a57ea58f0131", 00:29:41.188 "strip_size_kb": 64, 00:29:41.188 "state": "configuring", 00:29:41.188 "raid_level": "raid0", 00:29:41.188 "superblock": true, 00:29:41.188 "num_base_bdevs": 4, 00:29:41.188 "num_base_bdevs_discovered": 1, 00:29:41.188 "num_base_bdevs_operational": 4, 00:29:41.188 "base_bdevs_list": [ 00:29:41.188 { 00:29:41.188 "name": "pt1", 00:29:41.188 "uuid": "00000000-0000-0000-0000-000000000001", 00:29:41.188 "is_configured": true, 00:29:41.188 "data_offset": 2048, 00:29:41.188 "data_size": 63488 00:29:41.188 }, 00:29:41.188 { 00:29:41.188 "name": null, 00:29:41.188 "uuid": "00000000-0000-0000-0000-000000000002", 00:29:41.188 "is_configured": false, 00:29:41.188 "data_offset": 0, 00:29:41.188 "data_size": 63488 00:29:41.188 }, 00:29:41.188 { 00:29:41.188 "name": null, 00:29:41.188 "uuid": "00000000-0000-0000-0000-000000000003", 00:29:41.188 "is_configured": false, 00:29:41.188 "data_offset": 2048, 00:29:41.188 "data_size": 63488 00:29:41.188 }, 00:29:41.188 { 00:29:41.188 "name": null, 00:29:41.188 "uuid": "00000000-0000-0000-0000-000000000004", 00:29:41.188 "is_configured": false, 00:29:41.188 "data_offset": 2048, 00:29:41.188 "data_size": 63488 00:29:41.188 } 00:29:41.188 ] 00:29:41.188 }' 00:29:41.188 18:28:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:29:41.188 18:28:12 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:29:41.758 18:28:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:29:41.758 18:28:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:29:41.758 18:28:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:29:41.758 18:28:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:41.758 18:28:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:41.758 [2024-12-06 18:28:12.497893] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:29:41.758 [2024-12-06 18:28:12.498029] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:41.758 [2024-12-06 18:28:12.498062] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:29:41.758 [2024-12-06 18:28:12.498075] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:41.758 [2024-12-06 18:28:12.498726] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:41.758 [2024-12-06 18:28:12.498756] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:29:41.758 [2024-12-06 18:28:12.498873] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:29:41.758 [2024-12-06 18:28:12.498904] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:29:41.758 pt2 00:29:41.758 18:28:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:41.758 18:28:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:29:41.758 18:28:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:29:41.758 18:28:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd 
bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:29:41.758 18:28:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:41.758 18:28:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:41.758 [2024-12-06 18:28:12.509902] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:29:41.758 [2024-12-06 18:28:12.510010] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:41.758 [2024-12-06 18:28:12.510042] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:29:41.758 [2024-12-06 18:28:12.510056] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:41.758 [2024-12-06 18:28:12.510693] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:41.758 [2024-12-06 18:28:12.510726] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:29:41.758 [2024-12-06 18:28:12.510851] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:29:41.758 [2024-12-06 18:28:12.510904] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:29:41.758 pt3 00:29:41.758 18:28:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:41.758 18:28:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:29:41.758 18:28:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:29:41.758 18:28:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:29:41.758 18:28:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:41.758 18:28:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:41.758 [2024-12-06 18:28:12.521875] 
vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:29:41.758 [2024-12-06 18:28:12.521978] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:41.758 [2024-12-06 18:28:12.522009] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:29:41.758 [2024-12-06 18:28:12.522023] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:41.758 [2024-12-06 18:28:12.522673] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:41.758 [2024-12-06 18:28:12.522714] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:29:41.758 [2024-12-06 18:28:12.522843] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:29:41.758 [2024-12-06 18:28:12.522879] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:29:41.758 [2024-12-06 18:28:12.523053] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:29:41.758 [2024-12-06 18:28:12.523065] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:29:41.758 [2024-12-06 18:28:12.523432] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:29:41.758 [2024-12-06 18:28:12.523705] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:29:41.758 [2024-12-06 18:28:12.523727] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:29:41.758 [2024-12-06 18:28:12.523902] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:29:41.758 pt4 00:29:41.758 18:28:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:41.758 18:28:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:29:41.758 18:28:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- 
# (( i < num_base_bdevs )) 00:29:41.758 18:28:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:29:41.758 18:28:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:29:41.758 18:28:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:29:41.758 18:28:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:29:41.758 18:28:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:29:41.758 18:28:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:29:41.758 18:28:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:29:41.758 18:28:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:29:41.758 18:28:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:29:41.758 18:28:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:29:41.758 18:28:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:41.758 18:28:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:41.758 18:28:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:41.758 18:28:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:41.758 18:28:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:41.758 18:28:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:29:41.758 "name": "raid_bdev1", 00:29:41.758 "uuid": "dfea0151-5351-4367-8565-a57ea58f0131", 00:29:41.758 "strip_size_kb": 64, 00:29:41.758 "state": "online", 00:29:41.758 "raid_level": "raid0", 00:29:41.758 
"superblock": true, 00:29:41.758 "num_base_bdevs": 4, 00:29:41.758 "num_base_bdevs_discovered": 4, 00:29:41.758 "num_base_bdevs_operational": 4, 00:29:41.758 "base_bdevs_list": [ 00:29:41.758 { 00:29:41.758 "name": "pt1", 00:29:41.758 "uuid": "00000000-0000-0000-0000-000000000001", 00:29:41.758 "is_configured": true, 00:29:41.758 "data_offset": 2048, 00:29:41.758 "data_size": 63488 00:29:41.758 }, 00:29:41.758 { 00:29:41.758 "name": "pt2", 00:29:41.758 "uuid": "00000000-0000-0000-0000-000000000002", 00:29:41.758 "is_configured": true, 00:29:41.758 "data_offset": 2048, 00:29:41.758 "data_size": 63488 00:29:41.758 }, 00:29:41.758 { 00:29:41.758 "name": "pt3", 00:29:41.758 "uuid": "00000000-0000-0000-0000-000000000003", 00:29:41.758 "is_configured": true, 00:29:41.758 "data_offset": 2048, 00:29:41.758 "data_size": 63488 00:29:41.758 }, 00:29:41.758 { 00:29:41.758 "name": "pt4", 00:29:41.758 "uuid": "00000000-0000-0000-0000-000000000004", 00:29:41.758 "is_configured": true, 00:29:41.758 "data_offset": 2048, 00:29:41.758 "data_size": 63488 00:29:41.758 } 00:29:41.758 ] 00:29:41.758 }' 00:29:41.758 18:28:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:29:41.758 18:28:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:42.018 18:28:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:29:42.018 18:28:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:29:42.018 18:28:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:29:42.018 18:28:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:29:42.018 18:28:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:29:42.018 18:28:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:29:42.018 18:28:12 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:29:42.018 18:28:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:42.018 18:28:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:29:42.018 18:28:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:42.018 [2024-12-06 18:28:12.950266] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:29:42.277 18:28:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:42.277 18:28:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:29:42.277 "name": "raid_bdev1", 00:29:42.277 "aliases": [ 00:29:42.277 "dfea0151-5351-4367-8565-a57ea58f0131" 00:29:42.277 ], 00:29:42.277 "product_name": "Raid Volume", 00:29:42.277 "block_size": 512, 00:29:42.277 "num_blocks": 253952, 00:29:42.277 "uuid": "dfea0151-5351-4367-8565-a57ea58f0131", 00:29:42.277 "assigned_rate_limits": { 00:29:42.277 "rw_ios_per_sec": 0, 00:29:42.277 "rw_mbytes_per_sec": 0, 00:29:42.277 "r_mbytes_per_sec": 0, 00:29:42.277 "w_mbytes_per_sec": 0 00:29:42.277 }, 00:29:42.277 "claimed": false, 00:29:42.277 "zoned": false, 00:29:42.277 "supported_io_types": { 00:29:42.277 "read": true, 00:29:42.277 "write": true, 00:29:42.277 "unmap": true, 00:29:42.277 "flush": true, 00:29:42.277 "reset": true, 00:29:42.277 "nvme_admin": false, 00:29:42.277 "nvme_io": false, 00:29:42.277 "nvme_io_md": false, 00:29:42.277 "write_zeroes": true, 00:29:42.277 "zcopy": false, 00:29:42.277 "get_zone_info": false, 00:29:42.277 "zone_management": false, 00:29:42.277 "zone_append": false, 00:29:42.278 "compare": false, 00:29:42.278 "compare_and_write": false, 00:29:42.278 "abort": false, 00:29:42.278 "seek_hole": false, 00:29:42.278 "seek_data": false, 00:29:42.278 "copy": false, 00:29:42.278 "nvme_iov_md": false 00:29:42.278 }, 00:29:42.278 
"memory_domains": [ 00:29:42.278 { 00:29:42.278 "dma_device_id": "system", 00:29:42.278 "dma_device_type": 1 00:29:42.278 }, 00:29:42.278 { 00:29:42.278 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:29:42.278 "dma_device_type": 2 00:29:42.278 }, 00:29:42.278 { 00:29:42.278 "dma_device_id": "system", 00:29:42.278 "dma_device_type": 1 00:29:42.278 }, 00:29:42.278 { 00:29:42.278 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:29:42.278 "dma_device_type": 2 00:29:42.278 }, 00:29:42.278 { 00:29:42.278 "dma_device_id": "system", 00:29:42.278 "dma_device_type": 1 00:29:42.278 }, 00:29:42.278 { 00:29:42.278 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:29:42.278 "dma_device_type": 2 00:29:42.278 }, 00:29:42.278 { 00:29:42.278 "dma_device_id": "system", 00:29:42.278 "dma_device_type": 1 00:29:42.278 }, 00:29:42.278 { 00:29:42.278 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:29:42.278 "dma_device_type": 2 00:29:42.278 } 00:29:42.278 ], 00:29:42.278 "driver_specific": { 00:29:42.278 "raid": { 00:29:42.278 "uuid": "dfea0151-5351-4367-8565-a57ea58f0131", 00:29:42.278 "strip_size_kb": 64, 00:29:42.278 "state": "online", 00:29:42.278 "raid_level": "raid0", 00:29:42.278 "superblock": true, 00:29:42.278 "num_base_bdevs": 4, 00:29:42.278 "num_base_bdevs_discovered": 4, 00:29:42.278 "num_base_bdevs_operational": 4, 00:29:42.278 "base_bdevs_list": [ 00:29:42.278 { 00:29:42.278 "name": "pt1", 00:29:42.278 "uuid": "00000000-0000-0000-0000-000000000001", 00:29:42.278 "is_configured": true, 00:29:42.278 "data_offset": 2048, 00:29:42.278 "data_size": 63488 00:29:42.278 }, 00:29:42.278 { 00:29:42.278 "name": "pt2", 00:29:42.278 "uuid": "00000000-0000-0000-0000-000000000002", 00:29:42.278 "is_configured": true, 00:29:42.278 "data_offset": 2048, 00:29:42.278 "data_size": 63488 00:29:42.278 }, 00:29:42.278 { 00:29:42.278 "name": "pt3", 00:29:42.278 "uuid": "00000000-0000-0000-0000-000000000003", 00:29:42.278 "is_configured": true, 00:29:42.278 "data_offset": 2048, 00:29:42.278 "data_size": 63488 
00:29:42.278 }, 00:29:42.278 { 00:29:42.278 "name": "pt4", 00:29:42.278 "uuid": "00000000-0000-0000-0000-000000000004", 00:29:42.278 "is_configured": true, 00:29:42.278 "data_offset": 2048, 00:29:42.278 "data_size": 63488 00:29:42.278 } 00:29:42.278 ] 00:29:42.278 } 00:29:42.278 } 00:29:42.278 }' 00:29:42.278 18:28:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:29:42.278 18:28:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:29:42.278 pt2 00:29:42.278 pt3 00:29:42.278 pt4' 00:29:42.278 18:28:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:29:42.278 18:28:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:29:42.278 18:28:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:29:42.278 18:28:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:29:42.278 18:28:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:42.278 18:28:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:42.278 18:28:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:29:42.278 18:28:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:42.278 18:28:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:29:42.278 18:28:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:29:42.278 18:28:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:29:42.278 18:28:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b pt2 00:29:42.278 18:28:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:42.278 18:28:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:42.278 18:28:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:29:42.278 18:28:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:42.278 18:28:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:29:42.278 18:28:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:29:42.278 18:28:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:29:42.278 18:28:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:29:42.278 18:28:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:29:42.278 18:28:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:42.278 18:28:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:42.278 18:28:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:42.278 18:28:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:29:42.278 18:28:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:29:42.278 18:28:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:29:42.278 18:28:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:29:42.278 18:28:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:42.278 18:28:13 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:29:42.537 18:28:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:29:42.537 18:28:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:42.537 18:28:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:29:42.537 18:28:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:29:42.537 18:28:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:29:42.537 18:28:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:42.537 18:28:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:42.537 18:28:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:29:42.537 [2024-12-06 18:28:13.278201] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:29:42.537 18:28:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:42.537 18:28:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' dfea0151-5351-4367-8565-a57ea58f0131 '!=' dfea0151-5351-4367-8565-a57ea58f0131 ']' 00:29:42.537 18:28:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0 00:29:42.537 18:28:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:29:42.537 18:28:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:29:42.537 18:28:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 70445 00:29:42.537 18:28:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 70445 ']' 00:29:42.537 18:28:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 70445 00:29:42.537 18:28:13 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@959 -- # uname 00:29:42.537 18:28:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:42.537 18:28:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70445 00:29:42.537 killing process with pid 70445 00:29:42.537 18:28:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:42.537 18:28:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:42.537 18:28:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70445' 00:29:42.537 18:28:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 70445 00:29:42.537 [2024-12-06 18:28:13.359994] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:29:42.537 18:28:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 70445 00:29:42.537 [2024-12-06 18:28:13.360131] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:29:42.537 [2024-12-06 18:28:13.360242] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:29:42.537 [2024-12-06 18:28:13.360256] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:29:43.103 [2024-12-06 18:28:13.809976] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:29:44.479 18:28:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:29:44.479 00:29:44.479 real 0m5.774s 00:29:44.479 user 0m8.038s 00:29:44.479 sys 0m1.219s 00:29:44.479 18:28:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:44.479 18:28:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:29:44.479 ************************************ 00:29:44.479 END TEST raid_superblock_test 
00:29:44.479 ************************************ 00:29:44.479 18:28:15 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid0 4 read 00:29:44.479 18:28:15 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:29:44.479 18:28:15 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:44.479 18:28:15 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:29:44.479 ************************************ 00:29:44.479 START TEST raid_read_error_test 00:29:44.479 ************************************ 00:29:44.479 18:28:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 4 read 00:29:44.479 18:28:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:29:44.479 18:28:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:29:44.479 18:28:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:29:44.479 18:28:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:29:44.479 18:28:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:29:44.479 18:28:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:29:44.479 18:28:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:29:44.479 18:28:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:29:44.480 18:28:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:29:44.480 18:28:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:29:44.480 18:28:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:29:44.480 18:28:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:29:44.480 18:28:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( 
i++ )) 00:29:44.480 18:28:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:29:44.480 18:28:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:29:44.480 18:28:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:29:44.480 18:28:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:29:44.480 18:28:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:29:44.480 18:28:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:29:44.480 18:28:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:29:44.480 18:28:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:29:44.480 18:28:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:29:44.480 18:28:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:29:44.480 18:28:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:29:44.480 18:28:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:29:44.480 18:28:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:29:44.480 18:28:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:29:44.480 18:28:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:29:44.480 18:28:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.AVaSO7W3wh 00:29:44.480 18:28:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=70710 00:29:44.480 18:28:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f 
-L bdev_raid 00:29:44.480 18:28:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 70710 00:29:44.480 18:28:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 70710 ']' 00:29:44.480 18:28:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:44.480 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:44.480 18:28:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:44.480 18:28:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:44.480 18:28:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:44.480 18:28:15 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:29:44.480 [2024-12-06 18:28:15.283230] Starting SPDK v25.01-pre git sha1 b6a18b192 / DPDK 24.03.0 initialization... 
00:29:44.480 [2024-12-06 18:28:15.283358] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70710 ] 00:29:44.737 [2024-12-06 18:28:15.466471] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:44.737 [2024-12-06 18:28:15.612398] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:44.994 [2024-12-06 18:28:15.862569] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:29:44.994 [2024-12-06 18:28:15.862664] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:29:45.254 18:28:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:45.254 18:28:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:29:45.254 18:28:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:29:45.254 18:28:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:29:45.254 18:28:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:45.254 18:28:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:29:45.511 BaseBdev1_malloc 00:29:45.511 18:28:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:45.511 18:28:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:29:45.511 18:28:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:45.511 18:28:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:29:45.511 true 00:29:45.512 18:28:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:29:45.512 18:28:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:29:45.512 18:28:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:45.512 18:28:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:29:45.512 [2024-12-06 18:28:16.224485] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:29:45.512 [2024-12-06 18:28:16.224832] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:45.512 [2024-12-06 18:28:16.224877] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:29:45.512 [2024-12-06 18:28:16.224896] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:45.512 [2024-12-06 18:28:16.228010] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:45.512 [2024-12-06 18:28:16.228263] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:29:45.512 BaseBdev1 00:29:45.512 18:28:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:45.512 18:28:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:29:45.512 18:28:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:29:45.512 18:28:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:45.512 18:28:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:29:45.512 BaseBdev2_malloc 00:29:45.512 18:28:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:45.512 18:28:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:29:45.512 18:28:16 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:29:45.512 18:28:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:29:45.512 true 00:29:45.512 18:28:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:45.512 18:28:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:29:45.512 18:28:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:45.512 18:28:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:29:45.512 [2024-12-06 18:28:16.301716] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:29:45.512 [2024-12-06 18:28:16.301821] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:45.512 [2024-12-06 18:28:16.301850] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:29:45.512 [2024-12-06 18:28:16.301867] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:45.512 [2024-12-06 18:28:16.304901] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:45.512 [2024-12-06 18:28:16.304962] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:29:45.512 BaseBdev2 00:29:45.512 18:28:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:45.512 18:28:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:29:45.512 18:28:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:29:45.512 18:28:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:45.512 18:28:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:29:45.512 BaseBdev3_malloc 00:29:45.512 18:28:16 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:45.512 18:28:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:29:45.512 18:28:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:45.512 18:28:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:29:45.512 true 00:29:45.512 18:28:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:45.512 18:28:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:29:45.512 18:28:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:45.512 18:28:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:29:45.512 [2024-12-06 18:28:16.392258] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:29:45.512 [2024-12-06 18:28:16.392358] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:45.512 [2024-12-06 18:28:16.392387] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:29:45.512 [2024-12-06 18:28:16.392403] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:45.512 [2024-12-06 18:28:16.395525] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:45.512 [2024-12-06 18:28:16.395579] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:29:45.512 BaseBdev3 00:29:45.512 18:28:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:45.512 18:28:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:29:45.512 18:28:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev4_malloc 00:29:45.512 18:28:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:45.512 18:28:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:29:45.512 BaseBdev4_malloc 00:29:45.512 18:28:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:45.512 18:28:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:29:45.512 18:28:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:45.512 18:28:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:29:45.770 true 00:29:45.770 18:28:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:45.770 18:28:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:29:45.770 18:28:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:45.770 18:28:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:29:45.770 [2024-12-06 18:28:16.468746] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:29:45.770 [2024-12-06 18:28:16.468850] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:45.770 [2024-12-06 18:28:16.468881] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:29:45.770 [2024-12-06 18:28:16.468898] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:45.770 [2024-12-06 18:28:16.471913] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:45.770 [2024-12-06 18:28:16.471974] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:29:45.770 BaseBdev4 00:29:45.770 18:28:16 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:45.771 18:28:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:29:45.771 18:28:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:45.771 18:28:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:29:45.771 [2024-12-06 18:28:16.480946] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:29:45.771 [2024-12-06 18:28:16.483673] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:29:45.771 [2024-12-06 18:28:16.483788] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:29:45.771 [2024-12-06 18:28:16.483866] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:29:45.771 [2024-12-06 18:28:16.484138] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:29:45.771 [2024-12-06 18:28:16.484176] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:29:45.771 [2024-12-06 18:28:16.484543] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:29:45.771 [2024-12-06 18:28:16.484750] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:29:45.771 [2024-12-06 18:28:16.484764] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:29:45.771 [2024-12-06 18:28:16.485054] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:29:45.771 18:28:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:45.771 18:28:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:29:45.771 18:28:16 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:29:45.771 18:28:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:29:45.771 18:28:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:29:45.771 18:28:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:29:45.771 18:28:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:29:45.771 18:28:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:29:45.771 18:28:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:29:45.771 18:28:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:29:45.771 18:28:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:29:45.771 18:28:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:45.771 18:28:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:45.771 18:28:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:45.771 18:28:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:29:45.771 18:28:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:45.771 18:28:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:29:45.771 "name": "raid_bdev1", 00:29:45.771 "uuid": "3d445675-d458-4fc8-a310-68e5104f9db3", 00:29:45.771 "strip_size_kb": 64, 00:29:45.771 "state": "online", 00:29:45.771 "raid_level": "raid0", 00:29:45.771 "superblock": true, 00:29:45.771 "num_base_bdevs": 4, 00:29:45.771 "num_base_bdevs_discovered": 4, 00:29:45.771 "num_base_bdevs_operational": 4, 00:29:45.771 "base_bdevs_list": [ 00:29:45.771 
{ 00:29:45.771 "name": "BaseBdev1", 00:29:45.771 "uuid": "22edc5fa-f70e-52fe-aba5-e60b32d9efd8", 00:29:45.771 "is_configured": true, 00:29:45.771 "data_offset": 2048, 00:29:45.771 "data_size": 63488 00:29:45.771 }, 00:29:45.771 { 00:29:45.771 "name": "BaseBdev2", 00:29:45.772 "uuid": "160ce159-3c26-5120-b120-c313bf836cc9", 00:29:45.772 "is_configured": true, 00:29:45.772 "data_offset": 2048, 00:29:45.772 "data_size": 63488 00:29:45.772 }, 00:29:45.772 { 00:29:45.772 "name": "BaseBdev3", 00:29:45.772 "uuid": "efa82945-fd87-58d3-b468-eefb11a1c810", 00:29:45.772 "is_configured": true, 00:29:45.772 "data_offset": 2048, 00:29:45.772 "data_size": 63488 00:29:45.772 }, 00:29:45.772 { 00:29:45.772 "name": "BaseBdev4", 00:29:45.772 "uuid": "f867e0d8-acf4-57d0-b31d-7fce3995f42b", 00:29:45.772 "is_configured": true, 00:29:45.772 "data_offset": 2048, 00:29:45.772 "data_size": 63488 00:29:45.772 } 00:29:45.772 ] 00:29:45.772 }' 00:29:45.772 18:28:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:29:45.772 18:28:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:29:46.030 18:28:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:29:46.030 18:28:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:29:46.288 [2024-12-06 18:28:17.002151] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:29:47.226 18:28:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:29:47.226 18:28:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:47.226 18:28:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:29:47.226 18:28:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:47.226 18:28:17 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:29:47.226 18:28:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:29:47.226 18:28:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:29:47.226 18:28:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:29:47.226 18:28:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:29:47.226 18:28:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:29:47.226 18:28:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:29:47.226 18:28:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:29:47.226 18:28:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:29:47.226 18:28:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:29:47.226 18:28:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:29:47.226 18:28:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:29:47.226 18:28:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:29:47.226 18:28:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:47.226 18:28:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:47.226 18:28:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:47.226 18:28:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:29:47.226 18:28:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:47.226 18:28:17 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:29:47.226 "name": "raid_bdev1", 00:29:47.226 "uuid": "3d445675-d458-4fc8-a310-68e5104f9db3", 00:29:47.226 "strip_size_kb": 64, 00:29:47.226 "state": "online", 00:29:47.226 "raid_level": "raid0", 00:29:47.226 "superblock": true, 00:29:47.226 "num_base_bdevs": 4, 00:29:47.226 "num_base_bdevs_discovered": 4, 00:29:47.226 "num_base_bdevs_operational": 4, 00:29:47.226 "base_bdevs_list": [ 00:29:47.226 { 00:29:47.226 "name": "BaseBdev1", 00:29:47.226 "uuid": "22edc5fa-f70e-52fe-aba5-e60b32d9efd8", 00:29:47.226 "is_configured": true, 00:29:47.226 "data_offset": 2048, 00:29:47.226 "data_size": 63488 00:29:47.226 }, 00:29:47.226 { 00:29:47.226 "name": "BaseBdev2", 00:29:47.226 "uuid": "160ce159-3c26-5120-b120-c313bf836cc9", 00:29:47.226 "is_configured": true, 00:29:47.226 "data_offset": 2048, 00:29:47.226 "data_size": 63488 00:29:47.227 }, 00:29:47.227 { 00:29:47.227 "name": "BaseBdev3", 00:29:47.227 "uuid": "efa82945-fd87-58d3-b468-eefb11a1c810", 00:29:47.227 "is_configured": true, 00:29:47.227 "data_offset": 2048, 00:29:47.227 "data_size": 63488 00:29:47.227 }, 00:29:47.227 { 00:29:47.227 "name": "BaseBdev4", 00:29:47.227 "uuid": "f867e0d8-acf4-57d0-b31d-7fce3995f42b", 00:29:47.227 "is_configured": true, 00:29:47.227 "data_offset": 2048, 00:29:47.227 "data_size": 63488 00:29:47.227 } 00:29:47.227 ] 00:29:47.227 }' 00:29:47.227 18:28:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:29:47.227 18:28:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:29:47.487 18:28:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:29:47.487 18:28:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:47.487 18:28:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:29:47.487 [2024-12-06 18:28:18.319289] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:29:47.487 [2024-12-06 18:28:18.319336] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:29:47.487 [2024-12-06 18:28:18.322012] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:29:47.487 [2024-12-06 18:28:18.322085] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:29:47.487 [2024-12-06 18:28:18.322133] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:29:47.487 [2024-12-06 18:28:18.322161] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:29:47.487 { 00:29:47.487 "results": [ 00:29:47.487 { 00:29:47.487 "job": "raid_bdev1", 00:29:47.487 "core_mask": "0x1", 00:29:47.487 "workload": "randrw", 00:29:47.487 "percentage": 50, 00:29:47.487 "status": "finished", 00:29:47.487 "queue_depth": 1, 00:29:47.487 "io_size": 131072, 00:29:47.487 "runtime": 1.316541, 00:29:47.487 "iops": 13976.017457868764, 00:29:47.487 "mibps": 1747.0021822335955, 00:29:47.487 "io_failed": 1, 00:29:47.487 "io_timeout": 0, 00:29:47.487 "avg_latency_us": 99.75457017461729, 00:29:47.487 "min_latency_us": 27.347791164658634, 00:29:47.487 "max_latency_us": 1388.3630522088354 00:29:47.487 } 00:29:47.487 ], 00:29:47.487 "core_count": 1 00:29:47.487 } 00:29:47.487 18:28:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:47.487 18:28:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 70710 00:29:47.487 18:28:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 70710 ']' 00:29:47.487 18:28:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 70710 00:29:47.487 18:28:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:29:47.487 18:28:18 bdev_raid.raid_read_error_test 
-- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:47.487 18:28:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70710 00:29:47.487 18:28:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:47.487 killing process with pid 70710 00:29:47.487 18:28:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:47.487 18:28:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70710' 00:29:47.487 18:28:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 70710 00:29:47.487 [2024-12-06 18:28:18.373990] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:29:47.487 18:28:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 70710 00:29:48.056 [2024-12-06 18:28:18.701360] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:29:48.994 18:28:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.AVaSO7W3wh 00:29:48.994 18:28:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:29:48.994 18:28:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:29:48.994 18:28:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.76 00:29:48.994 18:28:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:29:48.994 18:28:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:29:48.994 18:28:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:29:48.994 18:28:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.76 != \0\.\0\0 ]] 00:29:48.994 00:29:48.994 real 0m4.750s 00:29:48.995 user 0m5.378s 00:29:48.995 sys 0m0.817s 00:29:48.995 18:28:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # 
xtrace_disable 00:29:48.995 18:28:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:29:48.995 ************************************ 00:29:48.995 END TEST raid_read_error_test 00:29:48.995 ************************************ 00:29:49.254 18:28:19 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid0 4 write 00:29:49.254 18:28:19 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:29:49.254 18:28:19 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:49.254 18:28:19 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:29:49.254 ************************************ 00:29:49.254 START TEST raid_write_error_test 00:29:49.254 ************************************ 00:29:49.254 18:28:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 4 write 00:29:49.254 18:28:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:29:49.254 18:28:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:29:49.254 18:28:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:29:49.254 18:28:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:29:49.254 18:28:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:29:49.254 18:28:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:29:49.254 18:28:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:29:49.254 18:28:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:29:49.254 18:28:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:29:49.254 18:28:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:29:49.254 18:28:19 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:29:49.254 18:28:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:29:49.254 18:28:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:29:49.254 18:28:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:29:49.254 18:28:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:29:49.254 18:28:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:29:49.254 18:28:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:29:49.254 18:28:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:29:49.254 18:28:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:29:49.254 18:28:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:29:49.254 18:28:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:29:49.254 18:28:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:29:49.254 18:28:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:29:49.254 18:28:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:29:49.254 18:28:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:29:49.254 18:28:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:29:49.254 18:28:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:29:49.254 18:28:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:29:49.254 18:28:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.MaPmzOjNGF 00:29:49.254 18:28:20 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=70861 00:29:49.254 18:28:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 70861 00:29:49.254 18:28:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:29:49.254 18:28:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 70861 ']' 00:29:49.254 18:28:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:49.254 18:28:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:49.254 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:49.254 18:28:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:49.254 18:28:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:49.254 18:28:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:29:49.254 [2024-12-06 18:28:20.108228] Starting SPDK v25.01-pre git sha1 b6a18b192 / DPDK 24.03.0 initialization... 
00:29:49.254 [2024-12-06 18:28:20.108350] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70861 ] 00:29:49.515 [2024-12-06 18:28:20.286726] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:49.515 [2024-12-06 18:28:20.401536] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:49.774 [2024-12-06 18:28:20.614450] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:29:49.774 [2024-12-06 18:28:20.614500] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:29:50.034 18:28:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:50.034 18:28:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:29:50.034 18:28:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:29:50.034 18:28:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:29:50.034 18:28:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:50.034 18:28:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:29:50.295 BaseBdev1_malloc 00:29:50.295 18:28:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:50.295 18:28:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:29:50.295 18:28:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:50.295 18:28:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:29:50.295 true 00:29:50.295 18:28:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:29:50.295 18:28:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:29:50.295 18:28:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:50.295 18:28:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:29:50.295 [2024-12-06 18:28:21.010598] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:29:50.295 [2024-12-06 18:28:21.010655] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:50.295 [2024-12-06 18:28:21.010677] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:29:50.295 [2024-12-06 18:28:21.010692] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:50.295 [2024-12-06 18:28:21.013010] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:50.295 [2024-12-06 18:28:21.013053] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:29:50.295 BaseBdev1 00:29:50.295 18:28:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:50.295 18:28:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:29:50.295 18:28:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:29:50.295 18:28:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:50.295 18:28:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:29:50.295 BaseBdev2_malloc 00:29:50.295 18:28:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:50.295 18:28:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:29:50.295 18:28:21 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:50.295 18:28:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:29:50.295 true 00:29:50.295 18:28:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:50.295 18:28:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:29:50.295 18:28:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:50.295 18:28:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:29:50.295 [2024-12-06 18:28:21.078720] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:29:50.295 [2024-12-06 18:28:21.078773] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:50.295 [2024-12-06 18:28:21.078792] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:29:50.295 [2024-12-06 18:28:21.078805] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:50.295 [2024-12-06 18:28:21.081140] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:50.295 [2024-12-06 18:28:21.081193] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:29:50.295 BaseBdev2 00:29:50.295 18:28:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:50.295 18:28:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:29:50.295 18:28:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:29:50.295 18:28:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:50.295 18:28:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 
00:29:50.295 BaseBdev3_malloc 00:29:50.295 18:28:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:50.295 18:28:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:29:50.295 18:28:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:50.295 18:28:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:29:50.295 true 00:29:50.295 18:28:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:50.295 18:28:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:29:50.295 18:28:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:50.295 18:28:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:29:50.295 [2024-12-06 18:28:21.157094] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:29:50.295 [2024-12-06 18:28:21.157156] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:50.295 [2024-12-06 18:28:21.157176] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:29:50.295 [2024-12-06 18:28:21.157190] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:50.295 [2024-12-06 18:28:21.159569] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:50.295 [2024-12-06 18:28:21.159610] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:29:50.295 BaseBdev3 00:29:50.296 18:28:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:50.296 18:28:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:29:50.296 18:28:21 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:29:50.296 18:28:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:50.296 18:28:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:29:50.296 BaseBdev4_malloc 00:29:50.296 18:28:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:50.296 18:28:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:29:50.296 18:28:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:50.296 18:28:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:29:50.296 true 00:29:50.296 18:28:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:50.296 18:28:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:29:50.296 18:28:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:50.296 18:28:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:29:50.296 [2024-12-06 18:28:21.225916] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:29:50.296 [2024-12-06 18:28:21.225965] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:29:50.296 [2024-12-06 18:28:21.225984] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:29:50.296 [2024-12-06 18:28:21.225998] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:29:50.296 [2024-12-06 18:28:21.228341] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:29:50.296 [2024-12-06 18:28:21.228382] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:29:50.296 BaseBdev4 
00:29:50.296 18:28:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:50.296 18:28:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:29:50.296 18:28:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:50.296 18:28:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:29:50.296 [2024-12-06 18:28:21.237962] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:29:50.296 [2024-12-06 18:28:21.240013] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:29:50.296 [2024-12-06 18:28:21.240090] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:29:50.296 [2024-12-06 18:28:21.240166] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:29:50.296 [2024-12-06 18:28:21.240375] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:29:50.296 [2024-12-06 18:28:21.240396] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:29:50.296 [2024-12-06 18:28:21.240644] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:29:50.296 [2024-12-06 18:28:21.240814] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:29:50.296 [2024-12-06 18:28:21.240829] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:29:50.296 [2024-12-06 18:28:21.240975] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:29:50.555 18:28:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:50.555 18:28:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state 
raid_bdev1 online raid0 64 4 00:29:50.555 18:28:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:29:50.555 18:28:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:29:50.555 18:28:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:29:50.555 18:28:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:29:50.555 18:28:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:29:50.555 18:28:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:29:50.555 18:28:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:29:50.555 18:28:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:29:50.555 18:28:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:29:50.555 18:28:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:50.555 18:28:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:50.555 18:28:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:50.555 18:28:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:29:50.555 18:28:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:50.555 18:28:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:29:50.555 "name": "raid_bdev1", 00:29:50.556 "uuid": "08abecb4-09c7-4fc1-bd71-37c79836b2bf", 00:29:50.556 "strip_size_kb": 64, 00:29:50.556 "state": "online", 00:29:50.556 "raid_level": "raid0", 00:29:50.556 "superblock": true, 00:29:50.556 "num_base_bdevs": 4, 00:29:50.556 "num_base_bdevs_discovered": 4, 00:29:50.556 
"num_base_bdevs_operational": 4, 00:29:50.556 "base_bdevs_list": [ 00:29:50.556 { 00:29:50.556 "name": "BaseBdev1", 00:29:50.556 "uuid": "fcdd885f-40e5-5a6c-80fe-96aa82a2b0e3", 00:29:50.556 "is_configured": true, 00:29:50.556 "data_offset": 2048, 00:29:50.556 "data_size": 63488 00:29:50.556 }, 00:29:50.556 { 00:29:50.556 "name": "BaseBdev2", 00:29:50.556 "uuid": "0c381f88-1a28-5d7d-b81e-02cb05562592", 00:29:50.556 "is_configured": true, 00:29:50.556 "data_offset": 2048, 00:29:50.556 "data_size": 63488 00:29:50.556 }, 00:29:50.556 { 00:29:50.556 "name": "BaseBdev3", 00:29:50.556 "uuid": "504b56dc-d87d-542d-bd56-3610a5669340", 00:29:50.556 "is_configured": true, 00:29:50.556 "data_offset": 2048, 00:29:50.556 "data_size": 63488 00:29:50.556 }, 00:29:50.556 { 00:29:50.556 "name": "BaseBdev4", 00:29:50.556 "uuid": "4e95e332-2108-5ea9-bb8f-52eccee4dffd", 00:29:50.556 "is_configured": true, 00:29:50.556 "data_offset": 2048, 00:29:50.556 "data_size": 63488 00:29:50.556 } 00:29:50.556 ] 00:29:50.556 }' 00:29:50.556 18:28:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:29:50.556 18:28:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:29:50.815 18:28:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:29:50.815 18:28:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:29:51.074 [2024-12-06 18:28:21.775210] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:29:52.014 18:28:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:29:52.014 18:28:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:52.014 18:28:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:29:52.014 18:28:22 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:52.014 18:28:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:29:52.014 18:28:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:29:52.014 18:28:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:29:52.014 18:28:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:29:52.014 18:28:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:29:52.014 18:28:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:29:52.014 18:28:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:29:52.014 18:28:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:29:52.014 18:28:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:29:52.014 18:28:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:29:52.014 18:28:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:29:52.014 18:28:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:29:52.014 18:28:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:29:52.015 18:28:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:29:52.015 18:28:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:52.015 18:28:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:52.015 18:28:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:29:52.015 18:28:22 bdev_raid.raid_write_error_test 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:52.015 18:28:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:29:52.015 "name": "raid_bdev1", 00:29:52.015 "uuid": "08abecb4-09c7-4fc1-bd71-37c79836b2bf", 00:29:52.015 "strip_size_kb": 64, 00:29:52.015 "state": "online", 00:29:52.015 "raid_level": "raid0", 00:29:52.015 "superblock": true, 00:29:52.015 "num_base_bdevs": 4, 00:29:52.015 "num_base_bdevs_discovered": 4, 00:29:52.015 "num_base_bdevs_operational": 4, 00:29:52.015 "base_bdevs_list": [ 00:29:52.015 { 00:29:52.015 "name": "BaseBdev1", 00:29:52.015 "uuid": "fcdd885f-40e5-5a6c-80fe-96aa82a2b0e3", 00:29:52.015 "is_configured": true, 00:29:52.015 "data_offset": 2048, 00:29:52.015 "data_size": 63488 00:29:52.015 }, 00:29:52.015 { 00:29:52.015 "name": "BaseBdev2", 00:29:52.015 "uuid": "0c381f88-1a28-5d7d-b81e-02cb05562592", 00:29:52.015 "is_configured": true, 00:29:52.015 "data_offset": 2048, 00:29:52.015 "data_size": 63488 00:29:52.015 }, 00:29:52.015 { 00:29:52.015 "name": "BaseBdev3", 00:29:52.015 "uuid": "504b56dc-d87d-542d-bd56-3610a5669340", 00:29:52.015 "is_configured": true, 00:29:52.015 "data_offset": 2048, 00:29:52.015 "data_size": 63488 00:29:52.015 }, 00:29:52.015 { 00:29:52.015 "name": "BaseBdev4", 00:29:52.015 "uuid": "4e95e332-2108-5ea9-bb8f-52eccee4dffd", 00:29:52.015 "is_configured": true, 00:29:52.015 "data_offset": 2048, 00:29:52.015 "data_size": 63488 00:29:52.015 } 00:29:52.015 ] 00:29:52.015 }' 00:29:52.015 18:28:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:29:52.015 18:28:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:29:52.274 18:28:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:29:52.274 18:28:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:52.274 18:28:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # 
set +x 00:29:52.274 [2024-12-06 18:28:23.114297] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:29:52.274 [2024-12-06 18:28:23.114337] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:29:52.274 [2024-12-06 18:28:23.116999] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:29:52.274 [2024-12-06 18:28:23.117065] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:29:52.274 [2024-12-06 18:28:23.117111] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:29:52.274 [2024-12-06 18:28:23.117126] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:29:52.274 { 00:29:52.274 "results": [ 00:29:52.274 { 00:29:52.274 "job": "raid_bdev1", 00:29:52.274 "core_mask": "0x1", 00:29:52.274 "workload": "randrw", 00:29:52.274 "percentage": 50, 00:29:52.274 "status": "finished", 00:29:52.274 "queue_depth": 1, 00:29:52.274 "io_size": 131072, 00:29:52.274 "runtime": 1.339162, 00:29:52.274 "iops": 15711.317973478937, 00:29:52.274 "mibps": 1963.9147466848672, 00:29:52.274 "io_failed": 1, 00:29:52.274 "io_timeout": 0, 00:29:52.274 "avg_latency_us": 87.98850788353738, 00:29:52.274 "min_latency_us": 27.553413654618474, 00:29:52.274 "max_latency_us": 1434.4224899598394 00:29:52.274 } 00:29:52.274 ], 00:29:52.274 "core_count": 1 00:29:52.274 } 00:29:52.274 18:28:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:52.274 18:28:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 70861 00:29:52.274 18:28:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 70861 ']' 00:29:52.274 18:28:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 70861 00:29:52.274 18:28:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # 
uname 00:29:52.274 18:28:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:52.274 18:28:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70861 00:29:52.275 18:28:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:52.275 18:28:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:52.275 18:28:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70861' 00:29:52.275 killing process with pid 70861 00:29:52.275 18:28:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 70861 00:29:52.275 [2024-12-06 18:28:23.151065] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:29:52.275 18:28:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 70861 00:29:52.535 [2024-12-06 18:28:23.481531] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:29:53.919 18:28:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:29:53.919 18:28:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.MaPmzOjNGF 00:29:53.919 18:28:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:29:53.919 18:28:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.75 00:29:53.919 18:28:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:29:53.919 18:28:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:29:53.919 18:28:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:29:53.919 18:28:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.75 != \0\.\0\0 ]] 00:29:53.919 00:29:53.919 real 0m4.706s 00:29:53.919 user 0m5.487s 00:29:53.919 sys 0m0.657s 00:29:53.919 
18:28:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:53.919 18:28:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:29:53.919 ************************************ 00:29:53.919 END TEST raid_write_error_test 00:29:53.919 ************************************ 00:29:53.919 18:28:24 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:29:53.919 18:28:24 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test concat 4 false 00:29:53.919 18:28:24 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:29:53.919 18:28:24 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:53.919 18:28:24 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:29:53.919 ************************************ 00:29:53.919 START TEST raid_state_function_test 00:29:53.919 ************************************ 00:29:53.919 18:28:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 4 false 00:29:53.919 18:28:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:29:53.919 18:28:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:29:53.919 18:28:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:29:53.919 18:28:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:29:53.919 18:28:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:29:53.919 18:28:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:29:53.919 18:28:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:29:53.919 18:28:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:29:53.919 18:28:24 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:29:53.919 18:28:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:29:53.919 18:28:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:29:53.919 18:28:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:29:53.919 18:28:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:29:53.919 18:28:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:29:53.919 18:28:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:29:53.919 18:28:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:29:53.919 18:28:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:29:53.919 18:28:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:29:53.919 18:28:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:29:53.919 18:28:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:29:53.919 18:28:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:29:53.919 18:28:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:29:53.919 18:28:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:29:53.919 18:28:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:29:53.919 18:28:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:29:53.919 18:28:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:29:53.919 18:28:24 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:29:53.919 18:28:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:29:53.919 18:28:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:29:53.919 18:28:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=70999 00:29:53.919 18:28:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:29:53.919 Process raid pid: 70999 00:29:53.919 18:28:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 70999' 00:29:53.919 18:28:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 70999 00:29:53.919 18:28:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 70999 ']' 00:29:53.919 18:28:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:53.919 18:28:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:53.919 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:53.919 18:28:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:53.919 18:28:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:53.919 18:28:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:54.179 [2024-12-06 18:28:24.885158] Starting SPDK v25.01-pre git sha1 b6a18b192 / DPDK 24.03.0 initialization... 
00:29:54.179 [2024-12-06 18:28:24.885273] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:54.179 [2024-12-06 18:28:25.066383] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:54.438 [2024-12-06 18:28:25.181563] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:54.696 [2024-12-06 18:28:25.395443] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:29:54.696 [2024-12-06 18:28:25.395494] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:29:54.955 18:28:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:54.955 18:28:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:29:54.955 18:28:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:29:54.955 18:28:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:54.955 18:28:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:54.955 [2024-12-06 18:28:25.824242] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:29:54.955 [2024-12-06 18:28:25.824311] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:29:54.955 [2024-12-06 18:28:25.824322] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:29:54.955 [2024-12-06 18:28:25.824336] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:29:54.955 [2024-12-06 18:28:25.824344] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:29:54.955 [2024-12-06 18:28:25.824356] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:29:54.955 [2024-12-06 18:28:25.824364] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:29:54.955 [2024-12-06 18:28:25.824376] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:29:54.955 18:28:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:54.955 18:28:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:29:54.955 18:28:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:29:54.955 18:28:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:29:54.955 18:28:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:29:54.955 18:28:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:29:54.955 18:28:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:29:54.955 18:28:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:29:54.955 18:28:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:29:54.955 18:28:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:29:54.955 18:28:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:29:54.955 18:28:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:54.955 18:28:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:29:54.955 18:28:25 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:29:54.955 18:28:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:54.955 18:28:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:54.955 18:28:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:29:54.955 "name": "Existed_Raid", 00:29:54.955 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:54.955 "strip_size_kb": 64, 00:29:54.955 "state": "configuring", 00:29:54.955 "raid_level": "concat", 00:29:54.955 "superblock": false, 00:29:54.955 "num_base_bdevs": 4, 00:29:54.955 "num_base_bdevs_discovered": 0, 00:29:54.955 "num_base_bdevs_operational": 4, 00:29:54.955 "base_bdevs_list": [ 00:29:54.955 { 00:29:54.955 "name": "BaseBdev1", 00:29:54.955 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:54.955 "is_configured": false, 00:29:54.955 "data_offset": 0, 00:29:54.955 "data_size": 0 00:29:54.955 }, 00:29:54.955 { 00:29:54.955 "name": "BaseBdev2", 00:29:54.955 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:54.955 "is_configured": false, 00:29:54.955 "data_offset": 0, 00:29:54.955 "data_size": 0 00:29:54.955 }, 00:29:54.955 { 00:29:54.955 "name": "BaseBdev3", 00:29:54.955 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:54.955 "is_configured": false, 00:29:54.955 "data_offset": 0, 00:29:54.955 "data_size": 0 00:29:54.955 }, 00:29:54.955 { 00:29:54.955 "name": "BaseBdev4", 00:29:54.955 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:54.955 "is_configured": false, 00:29:54.955 "data_offset": 0, 00:29:54.955 "data_size": 0 00:29:54.955 } 00:29:54.955 ] 00:29:54.955 }' 00:29:54.955 18:28:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:29:54.955 18:28:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:55.523 18:28:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete 
Existed_Raid 00:29:55.523 18:28:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:55.523 18:28:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:55.523 [2024-12-06 18:28:26.283743] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:29:55.523 [2024-12-06 18:28:26.283790] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:29:55.523 18:28:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:55.523 18:28:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:29:55.523 18:28:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:55.523 18:28:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:55.523 [2024-12-06 18:28:26.295747] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:29:55.523 [2024-12-06 18:28:26.295804] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:29:55.523 [2024-12-06 18:28:26.295815] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:29:55.523 [2024-12-06 18:28:26.295827] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:29:55.523 [2024-12-06 18:28:26.295835] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:29:55.523 [2024-12-06 18:28:26.295847] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:29:55.523 [2024-12-06 18:28:26.295855] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:29:55.523 [2024-12-06 18:28:26.295868] bdev_raid_rpc.c: 
311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:29:55.523 18:28:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:55.523 18:28:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:29:55.523 18:28:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:55.523 18:28:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:55.523 [2024-12-06 18:28:26.348460] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:29:55.523 BaseBdev1 00:29:55.523 18:28:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:55.523 18:28:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:29:55.523 18:28:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:29:55.523 18:28:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:29:55.523 18:28:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:29:55.523 18:28:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:29:55.523 18:28:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:29:55.523 18:28:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:29:55.523 18:28:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:55.523 18:28:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:55.523 18:28:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:55.523 18:28:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev1 -t 2000 00:29:55.523 18:28:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:55.523 18:28:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:55.523 [ 00:29:55.523 { 00:29:55.523 "name": "BaseBdev1", 00:29:55.523 "aliases": [ 00:29:55.523 "08a702a9-63db-4689-bd3d-07dcdd679499" 00:29:55.523 ], 00:29:55.523 "product_name": "Malloc disk", 00:29:55.523 "block_size": 512, 00:29:55.523 "num_blocks": 65536, 00:29:55.523 "uuid": "08a702a9-63db-4689-bd3d-07dcdd679499", 00:29:55.523 "assigned_rate_limits": { 00:29:55.523 "rw_ios_per_sec": 0, 00:29:55.523 "rw_mbytes_per_sec": 0, 00:29:55.523 "r_mbytes_per_sec": 0, 00:29:55.523 "w_mbytes_per_sec": 0 00:29:55.523 }, 00:29:55.523 "claimed": true, 00:29:55.523 "claim_type": "exclusive_write", 00:29:55.523 "zoned": false, 00:29:55.523 "supported_io_types": { 00:29:55.523 "read": true, 00:29:55.523 "write": true, 00:29:55.523 "unmap": true, 00:29:55.523 "flush": true, 00:29:55.523 "reset": true, 00:29:55.523 "nvme_admin": false, 00:29:55.523 "nvme_io": false, 00:29:55.523 "nvme_io_md": false, 00:29:55.523 "write_zeroes": true, 00:29:55.523 "zcopy": true, 00:29:55.523 "get_zone_info": false, 00:29:55.523 "zone_management": false, 00:29:55.523 "zone_append": false, 00:29:55.523 "compare": false, 00:29:55.523 "compare_and_write": false, 00:29:55.523 "abort": true, 00:29:55.523 "seek_hole": false, 00:29:55.523 "seek_data": false, 00:29:55.523 "copy": true, 00:29:55.523 "nvme_iov_md": false 00:29:55.523 }, 00:29:55.523 "memory_domains": [ 00:29:55.523 { 00:29:55.523 "dma_device_id": "system", 00:29:55.523 "dma_device_type": 1 00:29:55.523 }, 00:29:55.523 { 00:29:55.523 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:29:55.523 "dma_device_type": 2 00:29:55.523 } 00:29:55.523 ], 00:29:55.523 "driver_specific": {} 00:29:55.523 } 00:29:55.523 ] 00:29:55.523 18:28:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:29:55.523 18:28:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:29:55.523 18:28:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:29:55.523 18:28:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:29:55.523 18:28:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:29:55.523 18:28:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:29:55.523 18:28:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:29:55.523 18:28:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:29:55.523 18:28:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:29:55.523 18:28:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:29:55.523 18:28:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:29:55.523 18:28:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:29:55.523 18:28:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:55.523 18:28:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:55.523 18:28:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:29:55.523 18:28:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:55.523 18:28:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:55.523 18:28:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:29:55.523 "name": "Existed_Raid", 
00:29:55.523 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:55.523 "strip_size_kb": 64, 00:29:55.523 "state": "configuring", 00:29:55.523 "raid_level": "concat", 00:29:55.523 "superblock": false, 00:29:55.523 "num_base_bdevs": 4, 00:29:55.523 "num_base_bdevs_discovered": 1, 00:29:55.523 "num_base_bdevs_operational": 4, 00:29:55.523 "base_bdevs_list": [ 00:29:55.523 { 00:29:55.523 "name": "BaseBdev1", 00:29:55.523 "uuid": "08a702a9-63db-4689-bd3d-07dcdd679499", 00:29:55.523 "is_configured": true, 00:29:55.523 "data_offset": 0, 00:29:55.523 "data_size": 65536 00:29:55.523 }, 00:29:55.523 { 00:29:55.523 "name": "BaseBdev2", 00:29:55.523 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:55.523 "is_configured": false, 00:29:55.523 "data_offset": 0, 00:29:55.523 "data_size": 0 00:29:55.523 }, 00:29:55.523 { 00:29:55.523 "name": "BaseBdev3", 00:29:55.523 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:55.523 "is_configured": false, 00:29:55.523 "data_offset": 0, 00:29:55.523 "data_size": 0 00:29:55.523 }, 00:29:55.523 { 00:29:55.523 "name": "BaseBdev4", 00:29:55.523 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:55.523 "is_configured": false, 00:29:55.523 "data_offset": 0, 00:29:55.523 "data_size": 0 00:29:55.523 } 00:29:55.523 ] 00:29:55.523 }' 00:29:55.523 18:28:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:29:55.523 18:28:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:56.090 18:28:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:29:56.091 18:28:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:56.091 18:28:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:56.091 [2024-12-06 18:28:26.851885] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:29:56.091 [2024-12-06 18:28:26.851954] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:29:56.091 18:28:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:56.091 18:28:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:29:56.091 18:28:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:56.091 18:28:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:56.091 [2024-12-06 18:28:26.863918] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:29:56.091 [2024-12-06 18:28:26.866094] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:29:56.091 [2024-12-06 18:28:26.866156] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:29:56.091 [2024-12-06 18:28:26.866169] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:29:56.091 [2024-12-06 18:28:26.866200] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:29:56.091 [2024-12-06 18:28:26.866209] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:29:56.091 [2024-12-06 18:28:26.866222] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:29:56.091 18:28:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:56.091 18:28:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:29:56.091 18:28:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:29:56.091 18:28:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 
00:29:56.091 18:28:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:29:56.091 18:28:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:29:56.091 18:28:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:29:56.091 18:28:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:29:56.091 18:28:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:29:56.091 18:28:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:29:56.091 18:28:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:29:56.091 18:28:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:29:56.091 18:28:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:29:56.091 18:28:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:56.091 18:28:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:56.091 18:28:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:29:56.091 18:28:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:56.091 18:28:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:56.091 18:28:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:29:56.091 "name": "Existed_Raid", 00:29:56.091 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:56.091 "strip_size_kb": 64, 00:29:56.091 "state": "configuring", 00:29:56.091 "raid_level": "concat", 00:29:56.091 "superblock": false, 00:29:56.091 "num_base_bdevs": 4, 00:29:56.091 
"num_base_bdevs_discovered": 1, 00:29:56.091 "num_base_bdevs_operational": 4, 00:29:56.091 "base_bdevs_list": [ 00:29:56.091 { 00:29:56.091 "name": "BaseBdev1", 00:29:56.091 "uuid": "08a702a9-63db-4689-bd3d-07dcdd679499", 00:29:56.091 "is_configured": true, 00:29:56.091 "data_offset": 0, 00:29:56.091 "data_size": 65536 00:29:56.091 }, 00:29:56.091 { 00:29:56.091 "name": "BaseBdev2", 00:29:56.091 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:56.091 "is_configured": false, 00:29:56.091 "data_offset": 0, 00:29:56.091 "data_size": 0 00:29:56.091 }, 00:29:56.091 { 00:29:56.091 "name": "BaseBdev3", 00:29:56.091 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:56.091 "is_configured": false, 00:29:56.091 "data_offset": 0, 00:29:56.091 "data_size": 0 00:29:56.091 }, 00:29:56.091 { 00:29:56.091 "name": "BaseBdev4", 00:29:56.091 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:56.091 "is_configured": false, 00:29:56.091 "data_offset": 0, 00:29:56.091 "data_size": 0 00:29:56.091 } 00:29:56.091 ] 00:29:56.091 }' 00:29:56.091 18:28:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:29:56.091 18:28:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:56.658 18:28:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:29:56.658 18:28:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:56.658 18:28:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:56.658 [2024-12-06 18:28:27.372106] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:29:56.658 BaseBdev2 00:29:56.658 18:28:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:56.658 18:28:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:29:56.658 18:28:27 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:29:56.658 18:28:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:29:56.658 18:28:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:29:56.658 18:28:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:29:56.658 18:28:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:29:56.658 18:28:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:29:56.658 18:28:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:56.658 18:28:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:56.658 18:28:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:56.658 18:28:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:29:56.658 18:28:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:56.658 18:28:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:56.658 [ 00:29:56.658 { 00:29:56.658 "name": "BaseBdev2", 00:29:56.658 "aliases": [ 00:29:56.658 "97f14dda-41c2-4524-bdac-04c9323eca5a" 00:29:56.658 ], 00:29:56.658 "product_name": "Malloc disk", 00:29:56.658 "block_size": 512, 00:29:56.658 "num_blocks": 65536, 00:29:56.658 "uuid": "97f14dda-41c2-4524-bdac-04c9323eca5a", 00:29:56.658 "assigned_rate_limits": { 00:29:56.658 "rw_ios_per_sec": 0, 00:29:56.658 "rw_mbytes_per_sec": 0, 00:29:56.658 "r_mbytes_per_sec": 0, 00:29:56.658 "w_mbytes_per_sec": 0 00:29:56.658 }, 00:29:56.658 "claimed": true, 00:29:56.658 "claim_type": "exclusive_write", 00:29:56.658 "zoned": false, 00:29:56.658 "supported_io_types": { 
00:29:56.658 "read": true, 00:29:56.658 "write": true, 00:29:56.658 "unmap": true, 00:29:56.658 "flush": true, 00:29:56.658 "reset": true, 00:29:56.658 "nvme_admin": false, 00:29:56.658 "nvme_io": false, 00:29:56.658 "nvme_io_md": false, 00:29:56.658 "write_zeroes": true, 00:29:56.658 "zcopy": true, 00:29:56.658 "get_zone_info": false, 00:29:56.658 "zone_management": false, 00:29:56.658 "zone_append": false, 00:29:56.658 "compare": false, 00:29:56.658 "compare_and_write": false, 00:29:56.658 "abort": true, 00:29:56.658 "seek_hole": false, 00:29:56.658 "seek_data": false, 00:29:56.658 "copy": true, 00:29:56.658 "nvme_iov_md": false 00:29:56.658 }, 00:29:56.658 "memory_domains": [ 00:29:56.658 { 00:29:56.658 "dma_device_id": "system", 00:29:56.658 "dma_device_type": 1 00:29:56.658 }, 00:29:56.658 { 00:29:56.658 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:29:56.658 "dma_device_type": 2 00:29:56.658 } 00:29:56.658 ], 00:29:56.658 "driver_specific": {} 00:29:56.658 } 00:29:56.658 ] 00:29:56.658 18:28:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:56.658 18:28:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:29:56.658 18:28:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:29:56.658 18:28:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:29:56.659 18:28:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:29:56.659 18:28:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:29:56.659 18:28:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:29:56.659 18:28:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:29:56.659 18:28:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # 
local strip_size=64 00:29:56.659 18:28:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:29:56.659 18:28:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:29:56.659 18:28:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:29:56.659 18:28:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:29:56.659 18:28:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:29:56.659 18:28:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:56.659 18:28:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:56.659 18:28:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:56.659 18:28:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:29:56.659 18:28:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:56.659 18:28:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:29:56.659 "name": "Existed_Raid", 00:29:56.659 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:56.659 "strip_size_kb": 64, 00:29:56.659 "state": "configuring", 00:29:56.659 "raid_level": "concat", 00:29:56.659 "superblock": false, 00:29:56.659 "num_base_bdevs": 4, 00:29:56.659 "num_base_bdevs_discovered": 2, 00:29:56.659 "num_base_bdevs_operational": 4, 00:29:56.659 "base_bdevs_list": [ 00:29:56.659 { 00:29:56.659 "name": "BaseBdev1", 00:29:56.659 "uuid": "08a702a9-63db-4689-bd3d-07dcdd679499", 00:29:56.659 "is_configured": true, 00:29:56.659 "data_offset": 0, 00:29:56.659 "data_size": 65536 00:29:56.659 }, 00:29:56.659 { 00:29:56.659 "name": "BaseBdev2", 00:29:56.659 "uuid": "97f14dda-41c2-4524-bdac-04c9323eca5a", 00:29:56.659 
"is_configured": true, 00:29:56.659 "data_offset": 0, 00:29:56.659 "data_size": 65536 00:29:56.659 }, 00:29:56.659 { 00:29:56.659 "name": "BaseBdev3", 00:29:56.659 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:56.659 "is_configured": false, 00:29:56.659 "data_offset": 0, 00:29:56.659 "data_size": 0 00:29:56.659 }, 00:29:56.659 { 00:29:56.659 "name": "BaseBdev4", 00:29:56.659 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:56.659 "is_configured": false, 00:29:56.659 "data_offset": 0, 00:29:56.659 "data_size": 0 00:29:56.659 } 00:29:56.659 ] 00:29:56.659 }' 00:29:56.659 18:28:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:29:56.659 18:28:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:56.918 18:28:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:29:56.918 18:28:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:56.918 18:28:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:57.177 [2024-12-06 18:28:27.899710] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:29:57.177 BaseBdev3 00:29:57.177 18:28:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:57.177 18:28:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:29:57.177 18:28:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:29:57.177 18:28:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:29:57.177 18:28:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:29:57.177 18:28:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:29:57.177 18:28:27 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # bdev_timeout=2000 00:29:57.177 18:28:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:29:57.177 18:28:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:57.177 18:28:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:57.177 18:28:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:57.177 18:28:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:29:57.177 18:28:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:57.177 18:28:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:57.177 [ 00:29:57.177 { 00:29:57.177 "name": "BaseBdev3", 00:29:57.177 "aliases": [ 00:29:57.177 "5c8f0530-5d1b-4629-a601-cfd6202f2099" 00:29:57.177 ], 00:29:57.177 "product_name": "Malloc disk", 00:29:57.177 "block_size": 512, 00:29:57.177 "num_blocks": 65536, 00:29:57.177 "uuid": "5c8f0530-5d1b-4629-a601-cfd6202f2099", 00:29:57.177 "assigned_rate_limits": { 00:29:57.177 "rw_ios_per_sec": 0, 00:29:57.177 "rw_mbytes_per_sec": 0, 00:29:57.177 "r_mbytes_per_sec": 0, 00:29:57.177 "w_mbytes_per_sec": 0 00:29:57.177 }, 00:29:57.177 "claimed": true, 00:29:57.177 "claim_type": "exclusive_write", 00:29:57.177 "zoned": false, 00:29:57.177 "supported_io_types": { 00:29:57.177 "read": true, 00:29:57.177 "write": true, 00:29:57.177 "unmap": true, 00:29:57.177 "flush": true, 00:29:57.177 "reset": true, 00:29:57.177 "nvme_admin": false, 00:29:57.177 "nvme_io": false, 00:29:57.177 "nvme_io_md": false, 00:29:57.177 "write_zeroes": true, 00:29:57.177 "zcopy": true, 00:29:57.177 "get_zone_info": false, 00:29:57.177 "zone_management": false, 00:29:57.177 "zone_append": false, 00:29:57.177 "compare": false, 00:29:57.177 "compare_and_write": false, 
00:29:57.177 "abort": true, 00:29:57.177 "seek_hole": false, 00:29:57.177 "seek_data": false, 00:29:57.177 "copy": true, 00:29:57.177 "nvme_iov_md": false 00:29:57.177 }, 00:29:57.177 "memory_domains": [ 00:29:57.177 { 00:29:57.177 "dma_device_id": "system", 00:29:57.177 "dma_device_type": 1 00:29:57.177 }, 00:29:57.177 { 00:29:57.177 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:29:57.177 "dma_device_type": 2 00:29:57.177 } 00:29:57.177 ], 00:29:57.177 "driver_specific": {} 00:29:57.177 } 00:29:57.177 ] 00:29:57.177 18:28:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:57.177 18:28:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:29:57.177 18:28:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:29:57.177 18:28:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:29:57.177 18:28:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:29:57.177 18:28:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:29:57.177 18:28:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:29:57.177 18:28:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:29:57.177 18:28:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:29:57.177 18:28:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:29:57.177 18:28:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:29:57.177 18:28:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:29:57.177 18:28:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:29:57.177 18:28:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:29:57.177 18:28:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:57.177 18:28:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:29:57.177 18:28:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:57.177 18:28:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:57.177 18:28:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:57.177 18:28:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:29:57.177 "name": "Existed_Raid", 00:29:57.177 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:57.177 "strip_size_kb": 64, 00:29:57.177 "state": "configuring", 00:29:57.177 "raid_level": "concat", 00:29:57.177 "superblock": false, 00:29:57.177 "num_base_bdevs": 4, 00:29:57.177 "num_base_bdevs_discovered": 3, 00:29:57.177 "num_base_bdevs_operational": 4, 00:29:57.177 "base_bdevs_list": [ 00:29:57.177 { 00:29:57.177 "name": "BaseBdev1", 00:29:57.177 "uuid": "08a702a9-63db-4689-bd3d-07dcdd679499", 00:29:57.177 "is_configured": true, 00:29:57.177 "data_offset": 0, 00:29:57.178 "data_size": 65536 00:29:57.178 }, 00:29:57.178 { 00:29:57.178 "name": "BaseBdev2", 00:29:57.178 "uuid": "97f14dda-41c2-4524-bdac-04c9323eca5a", 00:29:57.178 "is_configured": true, 00:29:57.178 "data_offset": 0, 00:29:57.178 "data_size": 65536 00:29:57.178 }, 00:29:57.178 { 00:29:57.178 "name": "BaseBdev3", 00:29:57.178 "uuid": "5c8f0530-5d1b-4629-a601-cfd6202f2099", 00:29:57.178 "is_configured": true, 00:29:57.178 "data_offset": 0, 00:29:57.178 "data_size": 65536 00:29:57.178 }, 00:29:57.178 { 00:29:57.178 "name": "BaseBdev4", 00:29:57.178 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:57.178 "is_configured": false, 
00:29:57.178 "data_offset": 0, 00:29:57.178 "data_size": 0 00:29:57.178 } 00:29:57.178 ] 00:29:57.178 }' 00:29:57.178 18:28:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:29:57.178 18:28:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:57.437 18:28:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:29:57.437 18:28:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:57.437 18:28:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:57.696 [2024-12-06 18:28:28.392387] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:29:57.696 [2024-12-06 18:28:28.392444] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:29:57.696 [2024-12-06 18:28:28.392454] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:29:57.696 [2024-12-06 18:28:28.392784] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:29:57.696 [2024-12-06 18:28:28.392941] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:29:57.696 [2024-12-06 18:28:28.392964] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:29:57.696 [2024-12-06 18:28:28.393246] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:29:57.696 BaseBdev4 00:29:57.696 18:28:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:57.696 18:28:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:29:57.696 18:28:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:29:57.696 18:28:28 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@904 -- # local bdev_timeout= 00:29:57.696 18:28:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:29:57.696 18:28:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:29:57.696 18:28:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:29:57.696 18:28:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:29:57.696 18:28:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:57.696 18:28:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:57.696 18:28:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:57.696 18:28:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:29:57.696 18:28:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:57.696 18:28:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:57.696 [ 00:29:57.696 { 00:29:57.696 "name": "BaseBdev4", 00:29:57.696 "aliases": [ 00:29:57.696 "b61ff80c-1029-4a1c-8628-4c0b2c5253db" 00:29:57.696 ], 00:29:57.696 "product_name": "Malloc disk", 00:29:57.696 "block_size": 512, 00:29:57.696 "num_blocks": 65536, 00:29:57.696 "uuid": "b61ff80c-1029-4a1c-8628-4c0b2c5253db", 00:29:57.696 "assigned_rate_limits": { 00:29:57.696 "rw_ios_per_sec": 0, 00:29:57.696 "rw_mbytes_per_sec": 0, 00:29:57.696 "r_mbytes_per_sec": 0, 00:29:57.696 "w_mbytes_per_sec": 0 00:29:57.696 }, 00:29:57.696 "claimed": true, 00:29:57.696 "claim_type": "exclusive_write", 00:29:57.696 "zoned": false, 00:29:57.696 "supported_io_types": { 00:29:57.696 "read": true, 00:29:57.696 "write": true, 00:29:57.696 "unmap": true, 00:29:57.696 "flush": true, 00:29:57.696 "reset": true, 00:29:57.696 
"nvme_admin": false, 00:29:57.696 "nvme_io": false, 00:29:57.696 "nvme_io_md": false, 00:29:57.696 "write_zeroes": true, 00:29:57.696 "zcopy": true, 00:29:57.696 "get_zone_info": false, 00:29:57.696 "zone_management": false, 00:29:57.696 "zone_append": false, 00:29:57.696 "compare": false, 00:29:57.696 "compare_and_write": false, 00:29:57.696 "abort": true, 00:29:57.696 "seek_hole": false, 00:29:57.696 "seek_data": false, 00:29:57.696 "copy": true, 00:29:57.696 "nvme_iov_md": false 00:29:57.696 }, 00:29:57.696 "memory_domains": [ 00:29:57.696 { 00:29:57.696 "dma_device_id": "system", 00:29:57.696 "dma_device_type": 1 00:29:57.696 }, 00:29:57.696 { 00:29:57.696 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:29:57.696 "dma_device_type": 2 00:29:57.696 } 00:29:57.696 ], 00:29:57.696 "driver_specific": {} 00:29:57.696 } 00:29:57.696 ] 00:29:57.696 18:28:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:57.696 18:28:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:29:57.696 18:28:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:29:57.696 18:28:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:29:57.696 18:28:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:29:57.696 18:28:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:29:57.696 18:28:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:29:57.696 18:28:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:29:57.696 18:28:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:29:57.696 18:28:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:29:57.696 
18:28:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:29:57.696 18:28:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:29:57.696 18:28:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:29:57.696 18:28:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:29:57.696 18:28:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:57.696 18:28:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:57.696 18:28:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:29:57.696 18:28:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:57.696 18:28:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:57.696 18:28:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:29:57.696 "name": "Existed_Raid", 00:29:57.696 "uuid": "0f7d9f7d-76e0-47ed-8232-397b2792fb91", 00:29:57.696 "strip_size_kb": 64, 00:29:57.696 "state": "online", 00:29:57.696 "raid_level": "concat", 00:29:57.696 "superblock": false, 00:29:57.696 "num_base_bdevs": 4, 00:29:57.696 "num_base_bdevs_discovered": 4, 00:29:57.696 "num_base_bdevs_operational": 4, 00:29:57.696 "base_bdevs_list": [ 00:29:57.696 { 00:29:57.696 "name": "BaseBdev1", 00:29:57.697 "uuid": "08a702a9-63db-4689-bd3d-07dcdd679499", 00:29:57.697 "is_configured": true, 00:29:57.697 "data_offset": 0, 00:29:57.697 "data_size": 65536 00:29:57.697 }, 00:29:57.697 { 00:29:57.697 "name": "BaseBdev2", 00:29:57.697 "uuid": "97f14dda-41c2-4524-bdac-04c9323eca5a", 00:29:57.697 "is_configured": true, 00:29:57.697 "data_offset": 0, 00:29:57.697 "data_size": 65536 00:29:57.697 }, 00:29:57.697 { 00:29:57.697 "name": "BaseBdev3", 
00:29:57.697 "uuid": "5c8f0530-5d1b-4629-a601-cfd6202f2099", 00:29:57.697 "is_configured": true, 00:29:57.697 "data_offset": 0, 00:29:57.697 "data_size": 65536 00:29:57.697 }, 00:29:57.697 { 00:29:57.697 "name": "BaseBdev4", 00:29:57.697 "uuid": "b61ff80c-1029-4a1c-8628-4c0b2c5253db", 00:29:57.697 "is_configured": true, 00:29:57.697 "data_offset": 0, 00:29:57.697 "data_size": 65536 00:29:57.697 } 00:29:57.697 ] 00:29:57.697 }' 00:29:57.697 18:28:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:29:57.697 18:28:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:57.957 18:28:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:29:57.957 18:28:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:29:57.957 18:28:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:29:57.957 18:28:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:29:57.957 18:28:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:29:57.957 18:28:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:29:57.957 18:28:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:29:57.957 18:28:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:29:57.957 18:28:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:57.957 18:28:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:57.957 [2024-12-06 18:28:28.864139] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:29:57.957 18:28:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:57.957 
18:28:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:29:57.957 "name": "Existed_Raid", 00:29:57.957 "aliases": [ 00:29:57.957 "0f7d9f7d-76e0-47ed-8232-397b2792fb91" 00:29:57.957 ], 00:29:57.957 "product_name": "Raid Volume", 00:29:57.957 "block_size": 512, 00:29:57.957 "num_blocks": 262144, 00:29:57.957 "uuid": "0f7d9f7d-76e0-47ed-8232-397b2792fb91", 00:29:57.957 "assigned_rate_limits": { 00:29:57.957 "rw_ios_per_sec": 0, 00:29:57.957 "rw_mbytes_per_sec": 0, 00:29:57.957 "r_mbytes_per_sec": 0, 00:29:57.957 "w_mbytes_per_sec": 0 00:29:57.957 }, 00:29:57.957 "claimed": false, 00:29:57.957 "zoned": false, 00:29:57.957 "supported_io_types": { 00:29:57.957 "read": true, 00:29:57.957 "write": true, 00:29:57.957 "unmap": true, 00:29:57.957 "flush": true, 00:29:57.957 "reset": true, 00:29:57.957 "nvme_admin": false, 00:29:57.957 "nvme_io": false, 00:29:57.957 "nvme_io_md": false, 00:29:57.957 "write_zeroes": true, 00:29:57.957 "zcopy": false, 00:29:57.957 "get_zone_info": false, 00:29:57.957 "zone_management": false, 00:29:57.957 "zone_append": false, 00:29:57.957 "compare": false, 00:29:57.957 "compare_and_write": false, 00:29:57.957 "abort": false, 00:29:57.957 "seek_hole": false, 00:29:57.957 "seek_data": false, 00:29:57.957 "copy": false, 00:29:57.957 "nvme_iov_md": false 00:29:57.957 }, 00:29:57.957 "memory_domains": [ 00:29:57.957 { 00:29:57.957 "dma_device_id": "system", 00:29:57.957 "dma_device_type": 1 00:29:57.957 }, 00:29:57.957 { 00:29:57.957 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:29:57.957 "dma_device_type": 2 00:29:57.957 }, 00:29:57.957 { 00:29:57.957 "dma_device_id": "system", 00:29:57.957 "dma_device_type": 1 00:29:57.957 }, 00:29:57.957 { 00:29:57.957 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:29:57.957 "dma_device_type": 2 00:29:57.957 }, 00:29:57.957 { 00:29:57.957 "dma_device_id": "system", 00:29:57.957 "dma_device_type": 1 00:29:57.957 }, 00:29:57.957 { 00:29:57.957 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:29:57.957 "dma_device_type": 2 00:29:57.957 }, 00:29:57.957 { 00:29:57.957 "dma_device_id": "system", 00:29:57.957 "dma_device_type": 1 00:29:57.957 }, 00:29:57.957 { 00:29:57.957 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:29:57.957 "dma_device_type": 2 00:29:57.957 } 00:29:57.957 ], 00:29:57.957 "driver_specific": { 00:29:57.957 "raid": { 00:29:57.957 "uuid": "0f7d9f7d-76e0-47ed-8232-397b2792fb91", 00:29:57.957 "strip_size_kb": 64, 00:29:57.957 "state": "online", 00:29:57.957 "raid_level": "concat", 00:29:57.957 "superblock": false, 00:29:57.957 "num_base_bdevs": 4, 00:29:57.957 "num_base_bdevs_discovered": 4, 00:29:57.957 "num_base_bdevs_operational": 4, 00:29:57.957 "base_bdevs_list": [ 00:29:57.957 { 00:29:57.957 "name": "BaseBdev1", 00:29:57.957 "uuid": "08a702a9-63db-4689-bd3d-07dcdd679499", 00:29:57.957 "is_configured": true, 00:29:57.957 "data_offset": 0, 00:29:57.957 "data_size": 65536 00:29:57.957 }, 00:29:57.957 { 00:29:57.957 "name": "BaseBdev2", 00:29:57.957 "uuid": "97f14dda-41c2-4524-bdac-04c9323eca5a", 00:29:57.957 "is_configured": true, 00:29:57.957 "data_offset": 0, 00:29:57.957 "data_size": 65536 00:29:57.957 }, 00:29:57.957 { 00:29:57.957 "name": "BaseBdev3", 00:29:57.957 "uuid": "5c8f0530-5d1b-4629-a601-cfd6202f2099", 00:29:57.957 "is_configured": true, 00:29:57.957 "data_offset": 0, 00:29:57.957 "data_size": 65536 00:29:57.957 }, 00:29:57.957 { 00:29:57.957 "name": "BaseBdev4", 00:29:57.957 "uuid": "b61ff80c-1029-4a1c-8628-4c0b2c5253db", 00:29:57.957 "is_configured": true, 00:29:57.957 "data_offset": 0, 00:29:57.957 "data_size": 65536 00:29:57.957 } 00:29:57.957 ] 00:29:57.957 } 00:29:57.957 } 00:29:57.957 }' 00:29:58.216 18:28:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:29:58.216 18:28:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:29:58.216 BaseBdev2 
00:29:58.216 BaseBdev3 00:29:58.216 BaseBdev4' 00:29:58.216 18:28:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:29:58.216 18:28:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:29:58.216 18:28:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:29:58.217 18:28:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:29:58.217 18:28:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:58.217 18:28:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:58.217 18:28:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:29:58.217 18:28:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:58.217 18:28:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:29:58.217 18:28:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:29:58.217 18:28:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:29:58.217 18:28:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:29:58.217 18:28:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:29:58.217 18:28:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:58.217 18:28:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:58.217 18:28:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:58.217 18:28:29 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:29:58.217 18:28:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:29:58.217 18:28:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:29:58.217 18:28:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:29:58.217 18:28:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:29:58.217 18:28:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:58.217 18:28:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:58.217 18:28:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:58.217 18:28:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:29:58.217 18:28:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:29:58.217 18:28:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:29:58.217 18:28:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:29:58.217 18:28:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:58.217 18:28:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:58.217 18:28:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:29:58.217 18:28:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:58.517 18:28:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:29:58.517 18:28:29 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:29:58.517 18:28:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:29:58.517 18:28:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:58.517 18:28:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:58.517 [2024-12-06 18:28:29.187405] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:29:58.517 [2024-12-06 18:28:29.187445] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:29:58.517 [2024-12-06 18:28:29.187498] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:29:58.517 18:28:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:58.517 18:28:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:29:58.517 18:28:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:29:58.517 18:28:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:29:58.517 18:28:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:29:58.517 18:28:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:29:58.517 18:28:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3 00:29:58.517 18:28:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:29:58.517 18:28:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:29:58.517 18:28:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:29:58.517 18:28:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local 
strip_size=64 00:29:58.517 18:28:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:29:58.517 18:28:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:29:58.517 18:28:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:29:58.517 18:28:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:29:58.517 18:28:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:29:58.517 18:28:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:58.517 18:28:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:29:58.517 18:28:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:58.517 18:28:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:58.517 18:28:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:58.517 18:28:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:29:58.517 "name": "Existed_Raid", 00:29:58.517 "uuid": "0f7d9f7d-76e0-47ed-8232-397b2792fb91", 00:29:58.517 "strip_size_kb": 64, 00:29:58.517 "state": "offline", 00:29:58.517 "raid_level": "concat", 00:29:58.517 "superblock": false, 00:29:58.517 "num_base_bdevs": 4, 00:29:58.517 "num_base_bdevs_discovered": 3, 00:29:58.517 "num_base_bdevs_operational": 3, 00:29:58.517 "base_bdevs_list": [ 00:29:58.517 { 00:29:58.517 "name": null, 00:29:58.517 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:58.517 "is_configured": false, 00:29:58.517 "data_offset": 0, 00:29:58.517 "data_size": 65536 00:29:58.517 }, 00:29:58.517 { 00:29:58.517 "name": "BaseBdev2", 00:29:58.517 "uuid": "97f14dda-41c2-4524-bdac-04c9323eca5a", 00:29:58.517 "is_configured": 
true, 00:29:58.517 "data_offset": 0, 00:29:58.517 "data_size": 65536 00:29:58.517 }, 00:29:58.517 { 00:29:58.517 "name": "BaseBdev3", 00:29:58.517 "uuid": "5c8f0530-5d1b-4629-a601-cfd6202f2099", 00:29:58.517 "is_configured": true, 00:29:58.517 "data_offset": 0, 00:29:58.517 "data_size": 65536 00:29:58.517 }, 00:29:58.517 { 00:29:58.517 "name": "BaseBdev4", 00:29:58.517 "uuid": "b61ff80c-1029-4a1c-8628-4c0b2c5253db", 00:29:58.517 "is_configured": true, 00:29:58.517 "data_offset": 0, 00:29:58.517 "data_size": 65536 00:29:58.517 } 00:29:58.517 ] 00:29:58.517 }' 00:29:58.517 18:28:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:29:58.517 18:28:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:58.796 18:28:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:29:58.796 18:28:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:29:58.796 18:28:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:58.796 18:28:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:58.796 18:28:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:29:58.796 18:28:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:58.796 18:28:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:58.796 18:28:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:29:58.796 18:28:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:29:58.796 18:28:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:29:58.796 18:28:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:29:58.796 18:28:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:58.796 [2024-12-06 18:28:29.743324] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:29:59.055 18:28:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:59.055 18:28:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:29:59.055 18:28:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:29:59.055 18:28:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:59.055 18:28:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:59.055 18:28:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:29:59.055 18:28:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:59.055 18:28:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:59.055 18:28:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:29:59.055 18:28:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:29:59.055 18:28:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:29:59.055 18:28:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:59.055 18:28:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:59.055 [2024-12-06 18:28:29.896042] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:29:59.055 18:28:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:59.055 18:28:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:29:59.055 18:28:29 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:29:59.055 18:28:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:59.055 18:28:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:29:59.055 18:28:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:59.055 18:28:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:59.314 18:28:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:59.314 18:28:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:29:59.314 18:28:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:29:59.314 18:28:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:29:59.314 18:28:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:59.314 18:28:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:59.314 [2024-12-06 18:28:30.050407] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:29:59.314 [2024-12-06 18:28:30.050468] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:29:59.314 18:28:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:59.314 18:28:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:29:59.314 18:28:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:29:59.314 18:28:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:29:59.314 18:28:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:29:59.314 18:28:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:59.314 18:28:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:59.314 18:28:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:59.314 18:28:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:29:59.314 18:28:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:29:59.314 18:28:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:29:59.314 18:28:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:29:59.314 18:28:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:29:59.314 18:28:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:29:59.314 18:28:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:59.314 18:28:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:59.314 BaseBdev2 00:29:59.314 18:28:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:59.314 18:28:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:29:59.314 18:28:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:29:59.314 18:28:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:29:59.314 18:28:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:29:59.314 18:28:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:29:59.314 18:28:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 
00:29:59.314 18:28:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:29:59.314 18:28:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:59.314 18:28:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:59.314 18:28:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:59.314 18:28:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:29:59.314 18:28:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:59.314 18:28:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:59.574 [ 00:29:59.574 { 00:29:59.574 "name": "BaseBdev2", 00:29:59.574 "aliases": [ 00:29:59.574 "4ec1d816-ba20-4ec3-ac15-93ae6ba57ebd" 00:29:59.574 ], 00:29:59.574 "product_name": "Malloc disk", 00:29:59.574 "block_size": 512, 00:29:59.574 "num_blocks": 65536, 00:29:59.574 "uuid": "4ec1d816-ba20-4ec3-ac15-93ae6ba57ebd", 00:29:59.574 "assigned_rate_limits": { 00:29:59.574 "rw_ios_per_sec": 0, 00:29:59.574 "rw_mbytes_per_sec": 0, 00:29:59.574 "r_mbytes_per_sec": 0, 00:29:59.574 "w_mbytes_per_sec": 0 00:29:59.574 }, 00:29:59.574 "claimed": false, 00:29:59.574 "zoned": false, 00:29:59.574 "supported_io_types": { 00:29:59.574 "read": true, 00:29:59.574 "write": true, 00:29:59.574 "unmap": true, 00:29:59.574 "flush": true, 00:29:59.574 "reset": true, 00:29:59.574 "nvme_admin": false, 00:29:59.574 "nvme_io": false, 00:29:59.574 "nvme_io_md": false, 00:29:59.574 "write_zeroes": true, 00:29:59.574 "zcopy": true, 00:29:59.574 "get_zone_info": false, 00:29:59.574 "zone_management": false, 00:29:59.574 "zone_append": false, 00:29:59.574 "compare": false, 00:29:59.574 "compare_and_write": false, 00:29:59.574 "abort": true, 00:29:59.574 "seek_hole": false, 00:29:59.574 "seek_data": false, 
00:29:59.574 "copy": true, 00:29:59.574 "nvme_iov_md": false 00:29:59.574 }, 00:29:59.574 "memory_domains": [ 00:29:59.574 { 00:29:59.574 "dma_device_id": "system", 00:29:59.574 "dma_device_type": 1 00:29:59.574 }, 00:29:59.574 { 00:29:59.574 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:29:59.574 "dma_device_type": 2 00:29:59.574 } 00:29:59.574 ], 00:29:59.574 "driver_specific": {} 00:29:59.574 } 00:29:59.574 ] 00:29:59.574 18:28:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:59.574 18:28:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:29:59.574 18:28:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:29:59.574 18:28:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:29:59.574 18:28:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:29:59.574 18:28:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:59.574 18:28:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:59.574 BaseBdev3 00:29:59.574 18:28:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:59.574 18:28:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:29:59.574 18:28:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:29:59.574 18:28:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:29:59.574 18:28:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:29:59.574 18:28:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:29:59.574 18:28:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:29:59.574 
18:28:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:29:59.574 18:28:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:59.574 18:28:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:59.574 18:28:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:59.574 18:28:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:29:59.574 18:28:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:59.574 18:28:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:59.574 [ 00:29:59.574 { 00:29:59.574 "name": "BaseBdev3", 00:29:59.574 "aliases": [ 00:29:59.574 "12559475-faf8-4be6-a299-dc564d594e64" 00:29:59.574 ], 00:29:59.574 "product_name": "Malloc disk", 00:29:59.574 "block_size": 512, 00:29:59.574 "num_blocks": 65536, 00:29:59.574 "uuid": "12559475-faf8-4be6-a299-dc564d594e64", 00:29:59.574 "assigned_rate_limits": { 00:29:59.574 "rw_ios_per_sec": 0, 00:29:59.574 "rw_mbytes_per_sec": 0, 00:29:59.574 "r_mbytes_per_sec": 0, 00:29:59.574 "w_mbytes_per_sec": 0 00:29:59.574 }, 00:29:59.574 "claimed": false, 00:29:59.574 "zoned": false, 00:29:59.574 "supported_io_types": { 00:29:59.574 "read": true, 00:29:59.574 "write": true, 00:29:59.574 "unmap": true, 00:29:59.574 "flush": true, 00:29:59.574 "reset": true, 00:29:59.574 "nvme_admin": false, 00:29:59.574 "nvme_io": false, 00:29:59.574 "nvme_io_md": false, 00:29:59.574 "write_zeroes": true, 00:29:59.574 "zcopy": true, 00:29:59.574 "get_zone_info": false, 00:29:59.574 "zone_management": false, 00:29:59.574 "zone_append": false, 00:29:59.574 "compare": false, 00:29:59.574 "compare_and_write": false, 00:29:59.574 "abort": true, 00:29:59.574 "seek_hole": false, 00:29:59.574 "seek_data": false, 00:29:59.574 
"copy": true, 00:29:59.574 "nvme_iov_md": false 00:29:59.574 }, 00:29:59.574 "memory_domains": [ 00:29:59.574 { 00:29:59.574 "dma_device_id": "system", 00:29:59.574 "dma_device_type": 1 00:29:59.574 }, 00:29:59.574 { 00:29:59.574 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:29:59.574 "dma_device_type": 2 00:29:59.574 } 00:29:59.574 ], 00:29:59.574 "driver_specific": {} 00:29:59.574 } 00:29:59.574 ] 00:29:59.574 18:28:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:59.574 18:28:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:29:59.574 18:28:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:29:59.574 18:28:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:29:59.574 18:28:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:29:59.574 18:28:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:59.574 18:28:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:59.574 BaseBdev4 00:29:59.574 18:28:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:59.574 18:28:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:29:59.574 18:28:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:29:59.574 18:28:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:29:59.574 18:28:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:29:59.574 18:28:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:29:59.574 18:28:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:29:59.574 18:28:30 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:29:59.575 18:28:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:59.575 18:28:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:59.575 18:28:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:59.575 18:28:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:29:59.575 18:28:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:59.575 18:28:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:59.575 [ 00:29:59.575 { 00:29:59.575 "name": "BaseBdev4", 00:29:59.575 "aliases": [ 00:29:59.575 "85fcf9c8-c718-4e61-a955-79d210445712" 00:29:59.575 ], 00:29:59.575 "product_name": "Malloc disk", 00:29:59.575 "block_size": 512, 00:29:59.575 "num_blocks": 65536, 00:29:59.575 "uuid": "85fcf9c8-c718-4e61-a955-79d210445712", 00:29:59.575 "assigned_rate_limits": { 00:29:59.575 "rw_ios_per_sec": 0, 00:29:59.575 "rw_mbytes_per_sec": 0, 00:29:59.575 "r_mbytes_per_sec": 0, 00:29:59.575 "w_mbytes_per_sec": 0 00:29:59.575 }, 00:29:59.575 "claimed": false, 00:29:59.575 "zoned": false, 00:29:59.575 "supported_io_types": { 00:29:59.575 "read": true, 00:29:59.575 "write": true, 00:29:59.575 "unmap": true, 00:29:59.575 "flush": true, 00:29:59.575 "reset": true, 00:29:59.575 "nvme_admin": false, 00:29:59.575 "nvme_io": false, 00:29:59.575 "nvme_io_md": false, 00:29:59.575 "write_zeroes": true, 00:29:59.575 "zcopy": true, 00:29:59.575 "get_zone_info": false, 00:29:59.575 "zone_management": false, 00:29:59.575 "zone_append": false, 00:29:59.575 "compare": false, 00:29:59.575 "compare_and_write": false, 00:29:59.575 "abort": true, 00:29:59.575 "seek_hole": false, 00:29:59.575 "seek_data": false, 00:29:59.575 "copy": true, 
00:29:59.575 "nvme_iov_md": false 00:29:59.575 }, 00:29:59.575 "memory_domains": [ 00:29:59.575 { 00:29:59.575 "dma_device_id": "system", 00:29:59.575 "dma_device_type": 1 00:29:59.575 }, 00:29:59.575 { 00:29:59.575 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:29:59.575 "dma_device_type": 2 00:29:59.575 } 00:29:59.575 ], 00:29:59.575 "driver_specific": {} 00:29:59.575 } 00:29:59.575 ] 00:29:59.575 18:28:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:59.575 18:28:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:29:59.575 18:28:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:29:59.575 18:28:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:29:59.575 18:28:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:29:59.575 18:28:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:59.575 18:28:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:59.575 [2024-12-06 18:28:30.479318] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:29:59.575 [2024-12-06 18:28:30.479515] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:29:59.575 [2024-12-06 18:28:30.479553] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:29:59.575 [2024-12-06 18:28:30.481697] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:29:59.575 [2024-12-06 18:28:30.481751] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:29:59.575 18:28:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:59.575 18:28:30 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:29:59.575 18:28:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:29:59.575 18:28:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:29:59.575 18:28:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:29:59.575 18:28:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:29:59.575 18:28:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:29:59.575 18:28:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:29:59.575 18:28:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:29:59.575 18:28:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:29:59.575 18:28:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:29:59.575 18:28:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:29:59.575 18:28:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:29:59.575 18:28:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:59.575 18:28:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:29:59.575 18:28:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:59.835 18:28:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:29:59.835 "name": "Existed_Raid", 00:29:59.835 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:59.835 "strip_size_kb": 64, 00:29:59.835 "state": "configuring", 00:29:59.835 
"raid_level": "concat", 00:29:59.835 "superblock": false, 00:29:59.835 "num_base_bdevs": 4, 00:29:59.835 "num_base_bdevs_discovered": 3, 00:29:59.835 "num_base_bdevs_operational": 4, 00:29:59.835 "base_bdevs_list": [ 00:29:59.835 { 00:29:59.835 "name": "BaseBdev1", 00:29:59.835 "uuid": "00000000-0000-0000-0000-000000000000", 00:29:59.835 "is_configured": false, 00:29:59.835 "data_offset": 0, 00:29:59.835 "data_size": 0 00:29:59.835 }, 00:29:59.835 { 00:29:59.835 "name": "BaseBdev2", 00:29:59.835 "uuid": "4ec1d816-ba20-4ec3-ac15-93ae6ba57ebd", 00:29:59.835 "is_configured": true, 00:29:59.835 "data_offset": 0, 00:29:59.835 "data_size": 65536 00:29:59.835 }, 00:29:59.835 { 00:29:59.835 "name": "BaseBdev3", 00:29:59.835 "uuid": "12559475-faf8-4be6-a299-dc564d594e64", 00:29:59.835 "is_configured": true, 00:29:59.835 "data_offset": 0, 00:29:59.835 "data_size": 65536 00:29:59.835 }, 00:29:59.835 { 00:29:59.835 "name": "BaseBdev4", 00:29:59.835 "uuid": "85fcf9c8-c718-4e61-a955-79d210445712", 00:29:59.835 "is_configured": true, 00:29:59.835 "data_offset": 0, 00:29:59.835 "data_size": 65536 00:29:59.835 } 00:29:59.835 ] 00:29:59.835 }' 00:29:59.835 18:28:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:29:59.835 18:28:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:00.095 18:28:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:30:00.095 18:28:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:00.095 18:28:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:00.095 [2024-12-06 18:28:30.914828] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:30:00.095 18:28:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:00.095 18:28:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # 
verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:30:00.095 18:28:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:30:00.095 18:28:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:30:00.095 18:28:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:30:00.095 18:28:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:30:00.095 18:28:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:30:00.095 18:28:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:30:00.095 18:28:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:30:00.095 18:28:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:30:00.095 18:28:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:30:00.095 18:28:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:00.095 18:28:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:00.095 18:28:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:00.095 18:28:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:30:00.095 18:28:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:00.095 18:28:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:30:00.095 "name": "Existed_Raid", 00:30:00.095 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:00.095 "strip_size_kb": 64, 00:30:00.095 "state": "configuring", 00:30:00.095 "raid_level": "concat", 00:30:00.095 "superblock": false, 
00:30:00.095 "num_base_bdevs": 4, 00:30:00.095 "num_base_bdevs_discovered": 2, 00:30:00.095 "num_base_bdevs_operational": 4, 00:30:00.095 "base_bdevs_list": [ 00:30:00.095 { 00:30:00.095 "name": "BaseBdev1", 00:30:00.095 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:00.095 "is_configured": false, 00:30:00.095 "data_offset": 0, 00:30:00.095 "data_size": 0 00:30:00.095 }, 00:30:00.095 { 00:30:00.095 "name": null, 00:30:00.095 "uuid": "4ec1d816-ba20-4ec3-ac15-93ae6ba57ebd", 00:30:00.095 "is_configured": false, 00:30:00.095 "data_offset": 0, 00:30:00.095 "data_size": 65536 00:30:00.095 }, 00:30:00.095 { 00:30:00.095 "name": "BaseBdev3", 00:30:00.095 "uuid": "12559475-faf8-4be6-a299-dc564d594e64", 00:30:00.095 "is_configured": true, 00:30:00.095 "data_offset": 0, 00:30:00.095 "data_size": 65536 00:30:00.095 }, 00:30:00.095 { 00:30:00.095 "name": "BaseBdev4", 00:30:00.095 "uuid": "85fcf9c8-c718-4e61-a955-79d210445712", 00:30:00.095 "is_configured": true, 00:30:00.095 "data_offset": 0, 00:30:00.095 "data_size": 65536 00:30:00.095 } 00:30:00.095 ] 00:30:00.095 }' 00:30:00.095 18:28:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:30:00.095 18:28:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:00.664 18:28:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:30:00.664 18:28:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:00.664 18:28:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:00.664 18:28:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:00.664 18:28:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:00.664 18:28:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:30:00.664 18:28:31 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:30:00.664 18:28:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:00.664 18:28:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:00.664 [2024-12-06 18:28:31.422418] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:30:00.664 BaseBdev1 00:30:00.664 18:28:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:00.664 18:28:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:30:00.664 18:28:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:30:00.664 18:28:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:30:00.664 18:28:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:30:00.664 18:28:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:30:00.664 18:28:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:30:00.664 18:28:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:30:00.664 18:28:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:00.664 18:28:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:00.664 18:28:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:00.664 18:28:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:30:00.664 18:28:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:00.664 18:28:31 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:30:00.664 [ 00:30:00.664 { 00:30:00.664 "name": "BaseBdev1", 00:30:00.664 "aliases": [ 00:30:00.664 "55f5950c-90c5-4cd0-84ce-6a341e01a9f3" 00:30:00.664 ], 00:30:00.664 "product_name": "Malloc disk", 00:30:00.664 "block_size": 512, 00:30:00.664 "num_blocks": 65536, 00:30:00.664 "uuid": "55f5950c-90c5-4cd0-84ce-6a341e01a9f3", 00:30:00.664 "assigned_rate_limits": { 00:30:00.664 "rw_ios_per_sec": 0, 00:30:00.664 "rw_mbytes_per_sec": 0, 00:30:00.664 "r_mbytes_per_sec": 0, 00:30:00.664 "w_mbytes_per_sec": 0 00:30:00.664 }, 00:30:00.664 "claimed": true, 00:30:00.664 "claim_type": "exclusive_write", 00:30:00.664 "zoned": false, 00:30:00.665 "supported_io_types": { 00:30:00.665 "read": true, 00:30:00.665 "write": true, 00:30:00.665 "unmap": true, 00:30:00.665 "flush": true, 00:30:00.665 "reset": true, 00:30:00.665 "nvme_admin": false, 00:30:00.665 "nvme_io": false, 00:30:00.665 "nvme_io_md": false, 00:30:00.665 "write_zeroes": true, 00:30:00.665 "zcopy": true, 00:30:00.665 "get_zone_info": false, 00:30:00.665 "zone_management": false, 00:30:00.665 "zone_append": false, 00:30:00.665 "compare": false, 00:30:00.665 "compare_and_write": false, 00:30:00.665 "abort": true, 00:30:00.665 "seek_hole": false, 00:30:00.665 "seek_data": false, 00:30:00.665 "copy": true, 00:30:00.665 "nvme_iov_md": false 00:30:00.665 }, 00:30:00.665 "memory_domains": [ 00:30:00.665 { 00:30:00.665 "dma_device_id": "system", 00:30:00.665 "dma_device_type": 1 00:30:00.665 }, 00:30:00.665 { 00:30:00.665 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:00.665 "dma_device_type": 2 00:30:00.665 } 00:30:00.665 ], 00:30:00.665 "driver_specific": {} 00:30:00.665 } 00:30:00.665 ] 00:30:00.665 18:28:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:00.665 18:28:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:30:00.665 18:28:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- 
# verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:30:00.665 18:28:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:30:00.665 18:28:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:30:00.665 18:28:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:30:00.665 18:28:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:30:00.665 18:28:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:30:00.665 18:28:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:30:00.665 18:28:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:30:00.665 18:28:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:30:00.665 18:28:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:30:00.665 18:28:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:00.665 18:28:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:30:00.665 18:28:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:00.665 18:28:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:00.665 18:28:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:00.665 18:28:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:30:00.665 "name": "Existed_Raid", 00:30:00.665 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:00.665 "strip_size_kb": 64, 00:30:00.665 "state": "configuring", 00:30:00.665 "raid_level": "concat", 00:30:00.665 "superblock": false, 
00:30:00.665 "num_base_bdevs": 4, 00:30:00.665 "num_base_bdevs_discovered": 3, 00:30:00.665 "num_base_bdevs_operational": 4, 00:30:00.665 "base_bdevs_list": [ 00:30:00.665 { 00:30:00.665 "name": "BaseBdev1", 00:30:00.665 "uuid": "55f5950c-90c5-4cd0-84ce-6a341e01a9f3", 00:30:00.665 "is_configured": true, 00:30:00.665 "data_offset": 0, 00:30:00.665 "data_size": 65536 00:30:00.665 }, 00:30:00.665 { 00:30:00.665 "name": null, 00:30:00.665 "uuid": "4ec1d816-ba20-4ec3-ac15-93ae6ba57ebd", 00:30:00.665 "is_configured": false, 00:30:00.665 "data_offset": 0, 00:30:00.665 "data_size": 65536 00:30:00.665 }, 00:30:00.665 { 00:30:00.665 "name": "BaseBdev3", 00:30:00.665 "uuid": "12559475-faf8-4be6-a299-dc564d594e64", 00:30:00.665 "is_configured": true, 00:30:00.665 "data_offset": 0, 00:30:00.665 "data_size": 65536 00:30:00.665 }, 00:30:00.665 { 00:30:00.665 "name": "BaseBdev4", 00:30:00.665 "uuid": "85fcf9c8-c718-4e61-a955-79d210445712", 00:30:00.665 "is_configured": true, 00:30:00.665 "data_offset": 0, 00:30:00.665 "data_size": 65536 00:30:00.665 } 00:30:00.665 ] 00:30:00.665 }' 00:30:00.665 18:28:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:30:00.665 18:28:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:01.234 18:28:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:01.234 18:28:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:30:01.234 18:28:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:01.234 18:28:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:01.234 18:28:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:01.234 18:28:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:30:01.234 18:28:31 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:30:01.235 18:28:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:01.235 18:28:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:01.235 [2024-12-06 18:28:31.941949] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:30:01.235 18:28:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:01.235 18:28:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:30:01.235 18:28:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:30:01.235 18:28:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:30:01.235 18:28:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:30:01.235 18:28:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:30:01.235 18:28:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:30:01.235 18:28:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:30:01.235 18:28:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:30:01.235 18:28:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:30:01.235 18:28:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:30:01.235 18:28:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:01.235 18:28:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:30:01.235 18:28:31 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:01.235 18:28:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:01.235 18:28:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:01.235 18:28:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:30:01.235 "name": "Existed_Raid", 00:30:01.235 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:01.235 "strip_size_kb": 64, 00:30:01.235 "state": "configuring", 00:30:01.235 "raid_level": "concat", 00:30:01.235 "superblock": false, 00:30:01.235 "num_base_bdevs": 4, 00:30:01.235 "num_base_bdevs_discovered": 2, 00:30:01.235 "num_base_bdevs_operational": 4, 00:30:01.235 "base_bdevs_list": [ 00:30:01.235 { 00:30:01.235 "name": "BaseBdev1", 00:30:01.235 "uuid": "55f5950c-90c5-4cd0-84ce-6a341e01a9f3", 00:30:01.235 "is_configured": true, 00:30:01.235 "data_offset": 0, 00:30:01.235 "data_size": 65536 00:30:01.235 }, 00:30:01.235 { 00:30:01.235 "name": null, 00:30:01.235 "uuid": "4ec1d816-ba20-4ec3-ac15-93ae6ba57ebd", 00:30:01.235 "is_configured": false, 00:30:01.235 "data_offset": 0, 00:30:01.235 "data_size": 65536 00:30:01.235 }, 00:30:01.235 { 00:30:01.235 "name": null, 00:30:01.235 "uuid": "12559475-faf8-4be6-a299-dc564d594e64", 00:30:01.235 "is_configured": false, 00:30:01.235 "data_offset": 0, 00:30:01.235 "data_size": 65536 00:30:01.235 }, 00:30:01.235 { 00:30:01.235 "name": "BaseBdev4", 00:30:01.235 "uuid": "85fcf9c8-c718-4e61-a955-79d210445712", 00:30:01.235 "is_configured": true, 00:30:01.235 "data_offset": 0, 00:30:01.235 "data_size": 65536 00:30:01.235 } 00:30:01.235 ] 00:30:01.235 }' 00:30:01.235 18:28:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:30:01.235 18:28:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:01.494 18:28:32 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:01.494 18:28:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:30:01.494 18:28:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:01.494 18:28:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:01.494 18:28:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:01.494 18:28:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:30:01.494 18:28:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:30:01.494 18:28:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:01.494 18:28:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:01.494 [2024-12-06 18:28:32.377921] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:30:01.494 18:28:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:01.494 18:28:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:30:01.494 18:28:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:30:01.494 18:28:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:30:01.494 18:28:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:30:01.494 18:28:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:30:01.494 18:28:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:30:01.494 18:28:32 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:30:01.494 18:28:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:30:01.494 18:28:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:30:01.494 18:28:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:30:01.494 18:28:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:01.494 18:28:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:30:01.494 18:28:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:01.494 18:28:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:01.494 18:28:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:01.494 18:28:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:30:01.494 "name": "Existed_Raid", 00:30:01.494 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:01.494 "strip_size_kb": 64, 00:30:01.494 "state": "configuring", 00:30:01.494 "raid_level": "concat", 00:30:01.494 "superblock": false, 00:30:01.494 "num_base_bdevs": 4, 00:30:01.494 "num_base_bdevs_discovered": 3, 00:30:01.494 "num_base_bdevs_operational": 4, 00:30:01.494 "base_bdevs_list": [ 00:30:01.494 { 00:30:01.494 "name": "BaseBdev1", 00:30:01.494 "uuid": "55f5950c-90c5-4cd0-84ce-6a341e01a9f3", 00:30:01.494 "is_configured": true, 00:30:01.495 "data_offset": 0, 00:30:01.495 "data_size": 65536 00:30:01.495 }, 00:30:01.495 { 00:30:01.495 "name": null, 00:30:01.495 "uuid": "4ec1d816-ba20-4ec3-ac15-93ae6ba57ebd", 00:30:01.495 "is_configured": false, 00:30:01.495 "data_offset": 0, 00:30:01.495 "data_size": 65536 00:30:01.495 }, 00:30:01.495 { 00:30:01.495 "name": "BaseBdev3", 00:30:01.495 "uuid": 
"12559475-faf8-4be6-a299-dc564d594e64", 00:30:01.495 "is_configured": true, 00:30:01.495 "data_offset": 0, 00:30:01.495 "data_size": 65536 00:30:01.495 }, 00:30:01.495 { 00:30:01.495 "name": "BaseBdev4", 00:30:01.495 "uuid": "85fcf9c8-c718-4e61-a955-79d210445712", 00:30:01.495 "is_configured": true, 00:30:01.495 "data_offset": 0, 00:30:01.495 "data_size": 65536 00:30:01.495 } 00:30:01.495 ] 00:30:01.495 }' 00:30:01.495 18:28:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:30:01.495 18:28:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:02.063 18:28:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:02.063 18:28:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:30:02.063 18:28:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:02.063 18:28:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:02.063 18:28:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:02.063 18:28:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:30:02.063 18:28:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:30:02.063 18:28:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:02.063 18:28:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:02.063 [2024-12-06 18:28:32.865952] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:30:02.063 18:28:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:02.063 18:28:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 
00:30:02.063 18:28:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:30:02.063 18:28:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:30:02.063 18:28:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:30:02.063 18:28:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:30:02.063 18:28:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:30:02.063 18:28:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:30:02.063 18:28:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:30:02.063 18:28:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:30:02.063 18:28:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:30:02.063 18:28:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:02.063 18:28:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:30:02.063 18:28:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:02.063 18:28:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:02.323 18:28:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:02.323 18:28:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:30:02.323 "name": "Existed_Raid", 00:30:02.323 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:02.323 "strip_size_kb": 64, 00:30:02.323 "state": "configuring", 00:30:02.323 "raid_level": "concat", 00:30:02.323 "superblock": false, 00:30:02.323 "num_base_bdevs": 4, 00:30:02.323 
"num_base_bdevs_discovered": 2, 00:30:02.323 "num_base_bdevs_operational": 4, 00:30:02.323 "base_bdevs_list": [ 00:30:02.323 { 00:30:02.323 "name": null, 00:30:02.323 "uuid": "55f5950c-90c5-4cd0-84ce-6a341e01a9f3", 00:30:02.323 "is_configured": false, 00:30:02.323 "data_offset": 0, 00:30:02.323 "data_size": 65536 00:30:02.323 }, 00:30:02.323 { 00:30:02.323 "name": null, 00:30:02.323 "uuid": "4ec1d816-ba20-4ec3-ac15-93ae6ba57ebd", 00:30:02.323 "is_configured": false, 00:30:02.323 "data_offset": 0, 00:30:02.323 "data_size": 65536 00:30:02.323 }, 00:30:02.323 { 00:30:02.323 "name": "BaseBdev3", 00:30:02.323 "uuid": "12559475-faf8-4be6-a299-dc564d594e64", 00:30:02.323 "is_configured": true, 00:30:02.323 "data_offset": 0, 00:30:02.323 "data_size": 65536 00:30:02.323 }, 00:30:02.323 { 00:30:02.323 "name": "BaseBdev4", 00:30:02.323 "uuid": "85fcf9c8-c718-4e61-a955-79d210445712", 00:30:02.323 "is_configured": true, 00:30:02.323 "data_offset": 0, 00:30:02.323 "data_size": 65536 00:30:02.323 } 00:30:02.323 ] 00:30:02.323 }' 00:30:02.323 18:28:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:30:02.323 18:28:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:02.582 18:28:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:02.582 18:28:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:30:02.582 18:28:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:02.582 18:28:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:02.582 18:28:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:02.582 18:28:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:30:02.582 18:28:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- 
# rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:30:02.582 18:28:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:02.582 18:28:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:02.582 [2024-12-06 18:28:33.448924] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:30:02.582 18:28:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:02.582 18:28:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:30:02.582 18:28:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:30:02.582 18:28:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:30:02.582 18:28:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:30:02.582 18:28:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:30:02.582 18:28:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:30:02.583 18:28:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:30:02.583 18:28:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:30:02.583 18:28:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:30:02.583 18:28:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:30:02.583 18:28:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:02.583 18:28:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:30:02.583 18:28:33 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:30:02.583 18:28:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:02.583 18:28:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:02.583 18:28:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:30:02.583 "name": "Existed_Raid", 00:30:02.583 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:02.583 "strip_size_kb": 64, 00:30:02.583 "state": "configuring", 00:30:02.583 "raid_level": "concat", 00:30:02.583 "superblock": false, 00:30:02.583 "num_base_bdevs": 4, 00:30:02.583 "num_base_bdevs_discovered": 3, 00:30:02.583 "num_base_bdevs_operational": 4, 00:30:02.583 "base_bdevs_list": [ 00:30:02.583 { 00:30:02.583 "name": null, 00:30:02.583 "uuid": "55f5950c-90c5-4cd0-84ce-6a341e01a9f3", 00:30:02.583 "is_configured": false, 00:30:02.583 "data_offset": 0, 00:30:02.583 "data_size": 65536 00:30:02.583 }, 00:30:02.583 { 00:30:02.583 "name": "BaseBdev2", 00:30:02.583 "uuid": "4ec1d816-ba20-4ec3-ac15-93ae6ba57ebd", 00:30:02.583 "is_configured": true, 00:30:02.583 "data_offset": 0, 00:30:02.583 "data_size": 65536 00:30:02.583 }, 00:30:02.583 { 00:30:02.583 "name": "BaseBdev3", 00:30:02.583 "uuid": "12559475-faf8-4be6-a299-dc564d594e64", 00:30:02.583 "is_configured": true, 00:30:02.583 "data_offset": 0, 00:30:02.583 "data_size": 65536 00:30:02.583 }, 00:30:02.583 { 00:30:02.583 "name": "BaseBdev4", 00:30:02.583 "uuid": "85fcf9c8-c718-4e61-a955-79d210445712", 00:30:02.583 "is_configured": true, 00:30:02.583 "data_offset": 0, 00:30:02.583 "data_size": 65536 00:30:02.583 } 00:30:02.583 ] 00:30:02.583 }' 00:30:02.583 18:28:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:30:02.583 18:28:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:03.150 18:28:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:30:03.150 18:28:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:30:03.150 18:28:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:03.150 18:28:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:03.150 18:28:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:03.150 18:28:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:30:03.150 18:28:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:03.150 18:28:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:03.150 18:28:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:30:03.150 18:28:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:03.150 18:28:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:03.150 18:28:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 55f5950c-90c5-4cd0-84ce-6a341e01a9f3 00:30:03.150 18:28:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:03.150 18:28:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:03.150 [2024-12-06 18:28:34.026674] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:30:03.150 [2024-12-06 18:28:34.026746] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:30:03.150 [2024-12-06 18:28:34.026756] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:30:03.150 [2024-12-06 18:28:34.027038] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d0000063c0 00:30:03.150 [2024-12-06 18:28:34.027215] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:30:03.150 [2024-12-06 18:28:34.027230] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:30:03.150 [2024-12-06 18:28:34.027511] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:30:03.150 NewBaseBdev 00:30:03.150 18:28:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:03.150 18:28:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:30:03.150 18:28:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:30:03.150 18:28:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:30:03.150 18:28:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:30:03.150 18:28:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:30:03.150 18:28:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:30:03.150 18:28:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:30:03.150 18:28:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:03.150 18:28:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:03.150 18:28:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:03.150 18:28:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:30:03.150 18:28:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:03.150 18:28:34 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:30:03.150 [ 00:30:03.150 { 00:30:03.150 "name": "NewBaseBdev", 00:30:03.150 "aliases": [ 00:30:03.150 "55f5950c-90c5-4cd0-84ce-6a341e01a9f3" 00:30:03.150 ], 00:30:03.150 "product_name": "Malloc disk", 00:30:03.150 "block_size": 512, 00:30:03.150 "num_blocks": 65536, 00:30:03.150 "uuid": "55f5950c-90c5-4cd0-84ce-6a341e01a9f3", 00:30:03.150 "assigned_rate_limits": { 00:30:03.150 "rw_ios_per_sec": 0, 00:30:03.150 "rw_mbytes_per_sec": 0, 00:30:03.150 "r_mbytes_per_sec": 0, 00:30:03.150 "w_mbytes_per_sec": 0 00:30:03.150 }, 00:30:03.150 "claimed": true, 00:30:03.150 "claim_type": "exclusive_write", 00:30:03.150 "zoned": false, 00:30:03.150 "supported_io_types": { 00:30:03.150 "read": true, 00:30:03.150 "write": true, 00:30:03.150 "unmap": true, 00:30:03.150 "flush": true, 00:30:03.150 "reset": true, 00:30:03.150 "nvme_admin": false, 00:30:03.150 "nvme_io": false, 00:30:03.150 "nvme_io_md": false, 00:30:03.150 "write_zeroes": true, 00:30:03.150 "zcopy": true, 00:30:03.150 "get_zone_info": false, 00:30:03.150 "zone_management": false, 00:30:03.150 "zone_append": false, 00:30:03.150 "compare": false, 00:30:03.150 "compare_and_write": false, 00:30:03.150 "abort": true, 00:30:03.150 "seek_hole": false, 00:30:03.150 "seek_data": false, 00:30:03.150 "copy": true, 00:30:03.150 "nvme_iov_md": false 00:30:03.150 }, 00:30:03.150 "memory_domains": [ 00:30:03.150 { 00:30:03.150 "dma_device_id": "system", 00:30:03.150 "dma_device_type": 1 00:30:03.150 }, 00:30:03.150 { 00:30:03.150 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:03.150 "dma_device_type": 2 00:30:03.150 } 00:30:03.150 ], 00:30:03.150 "driver_specific": {} 00:30:03.150 } 00:30:03.150 ] 00:30:03.150 18:28:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:03.150 18:28:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:30:03.150 18:28:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 
-- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:30:03.150 18:28:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:30:03.150 18:28:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:30:03.150 18:28:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:30:03.150 18:28:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:30:03.150 18:28:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:30:03.150 18:28:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:30:03.150 18:28:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:30:03.150 18:28:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:30:03.150 18:28:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:30:03.150 18:28:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:03.150 18:28:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:03.150 18:28:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:30:03.150 18:28:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:03.409 18:28:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:03.409 18:28:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:30:03.410 "name": "Existed_Raid", 00:30:03.410 "uuid": "d2381341-d878-41b1-931f-ab6b8159cf1a", 00:30:03.410 "strip_size_kb": 64, 00:30:03.410 "state": "online", 00:30:03.410 "raid_level": "concat", 00:30:03.410 "superblock": false, 00:30:03.410 
"num_base_bdevs": 4, 00:30:03.410 "num_base_bdevs_discovered": 4, 00:30:03.410 "num_base_bdevs_operational": 4, 00:30:03.410 "base_bdevs_list": [ 00:30:03.410 { 00:30:03.410 "name": "NewBaseBdev", 00:30:03.410 "uuid": "55f5950c-90c5-4cd0-84ce-6a341e01a9f3", 00:30:03.410 "is_configured": true, 00:30:03.410 "data_offset": 0, 00:30:03.410 "data_size": 65536 00:30:03.410 }, 00:30:03.410 { 00:30:03.410 "name": "BaseBdev2", 00:30:03.410 "uuid": "4ec1d816-ba20-4ec3-ac15-93ae6ba57ebd", 00:30:03.410 "is_configured": true, 00:30:03.410 "data_offset": 0, 00:30:03.410 "data_size": 65536 00:30:03.410 }, 00:30:03.410 { 00:30:03.410 "name": "BaseBdev3", 00:30:03.410 "uuid": "12559475-faf8-4be6-a299-dc564d594e64", 00:30:03.410 "is_configured": true, 00:30:03.410 "data_offset": 0, 00:30:03.410 "data_size": 65536 00:30:03.410 }, 00:30:03.410 { 00:30:03.410 "name": "BaseBdev4", 00:30:03.410 "uuid": "85fcf9c8-c718-4e61-a955-79d210445712", 00:30:03.410 "is_configured": true, 00:30:03.410 "data_offset": 0, 00:30:03.410 "data_size": 65536 00:30:03.410 } 00:30:03.410 ] 00:30:03.410 }' 00:30:03.410 18:28:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:30:03.410 18:28:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:03.669 18:28:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:30:03.669 18:28:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:30:03.669 18:28:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:30:03.669 18:28:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:30:03.669 18:28:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:30:03.669 18:28:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:30:03.669 18:28:34 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:30:03.669 18:28:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:30:03.669 18:28:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:03.669 18:28:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:03.669 [2024-12-06 18:28:34.554382] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:30:03.669 18:28:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:03.669 18:28:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:30:03.669 "name": "Existed_Raid", 00:30:03.669 "aliases": [ 00:30:03.669 "d2381341-d878-41b1-931f-ab6b8159cf1a" 00:30:03.669 ], 00:30:03.669 "product_name": "Raid Volume", 00:30:03.669 "block_size": 512, 00:30:03.669 "num_blocks": 262144, 00:30:03.669 "uuid": "d2381341-d878-41b1-931f-ab6b8159cf1a", 00:30:03.669 "assigned_rate_limits": { 00:30:03.669 "rw_ios_per_sec": 0, 00:30:03.669 "rw_mbytes_per_sec": 0, 00:30:03.669 "r_mbytes_per_sec": 0, 00:30:03.669 "w_mbytes_per_sec": 0 00:30:03.669 }, 00:30:03.669 "claimed": false, 00:30:03.669 "zoned": false, 00:30:03.669 "supported_io_types": { 00:30:03.669 "read": true, 00:30:03.669 "write": true, 00:30:03.669 "unmap": true, 00:30:03.669 "flush": true, 00:30:03.669 "reset": true, 00:30:03.669 "nvme_admin": false, 00:30:03.669 "nvme_io": false, 00:30:03.669 "nvme_io_md": false, 00:30:03.669 "write_zeroes": true, 00:30:03.669 "zcopy": false, 00:30:03.669 "get_zone_info": false, 00:30:03.669 "zone_management": false, 00:30:03.669 "zone_append": false, 00:30:03.669 "compare": false, 00:30:03.669 "compare_and_write": false, 00:30:03.669 "abort": false, 00:30:03.669 "seek_hole": false, 00:30:03.669 "seek_data": false, 00:30:03.669 "copy": false, 00:30:03.669 "nvme_iov_md": false 00:30:03.669 }, 
00:30:03.669 "memory_domains": [ 00:30:03.669 { 00:30:03.669 "dma_device_id": "system", 00:30:03.669 "dma_device_type": 1 00:30:03.669 }, 00:30:03.669 { 00:30:03.669 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:03.669 "dma_device_type": 2 00:30:03.669 }, 00:30:03.669 { 00:30:03.669 "dma_device_id": "system", 00:30:03.669 "dma_device_type": 1 00:30:03.669 }, 00:30:03.669 { 00:30:03.669 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:03.669 "dma_device_type": 2 00:30:03.669 }, 00:30:03.669 { 00:30:03.669 "dma_device_id": "system", 00:30:03.669 "dma_device_type": 1 00:30:03.669 }, 00:30:03.669 { 00:30:03.669 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:03.669 "dma_device_type": 2 00:30:03.669 }, 00:30:03.669 { 00:30:03.669 "dma_device_id": "system", 00:30:03.669 "dma_device_type": 1 00:30:03.669 }, 00:30:03.669 { 00:30:03.669 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:03.669 "dma_device_type": 2 00:30:03.669 } 00:30:03.669 ], 00:30:03.669 "driver_specific": { 00:30:03.669 "raid": { 00:30:03.669 "uuid": "d2381341-d878-41b1-931f-ab6b8159cf1a", 00:30:03.669 "strip_size_kb": 64, 00:30:03.669 "state": "online", 00:30:03.669 "raid_level": "concat", 00:30:03.669 "superblock": false, 00:30:03.669 "num_base_bdevs": 4, 00:30:03.669 "num_base_bdevs_discovered": 4, 00:30:03.669 "num_base_bdevs_operational": 4, 00:30:03.669 "base_bdevs_list": [ 00:30:03.669 { 00:30:03.669 "name": "NewBaseBdev", 00:30:03.669 "uuid": "55f5950c-90c5-4cd0-84ce-6a341e01a9f3", 00:30:03.669 "is_configured": true, 00:30:03.669 "data_offset": 0, 00:30:03.669 "data_size": 65536 00:30:03.669 }, 00:30:03.669 { 00:30:03.669 "name": "BaseBdev2", 00:30:03.669 "uuid": "4ec1d816-ba20-4ec3-ac15-93ae6ba57ebd", 00:30:03.669 "is_configured": true, 00:30:03.669 "data_offset": 0, 00:30:03.669 "data_size": 65536 00:30:03.669 }, 00:30:03.669 { 00:30:03.669 "name": "BaseBdev3", 00:30:03.669 "uuid": "12559475-faf8-4be6-a299-dc564d594e64", 00:30:03.669 "is_configured": true, 00:30:03.669 "data_offset": 0, 
00:30:03.669 "data_size": 65536 00:30:03.669 }, 00:30:03.669 { 00:30:03.669 "name": "BaseBdev4", 00:30:03.669 "uuid": "85fcf9c8-c718-4e61-a955-79d210445712", 00:30:03.669 "is_configured": true, 00:30:03.669 "data_offset": 0, 00:30:03.669 "data_size": 65536 00:30:03.669 } 00:30:03.669 ] 00:30:03.669 } 00:30:03.669 } 00:30:03.669 }' 00:30:03.669 18:28:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:30:03.928 18:28:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:30:03.928 BaseBdev2 00:30:03.928 BaseBdev3 00:30:03.928 BaseBdev4' 00:30:03.928 18:28:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:30:03.928 18:28:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:30:03.928 18:28:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:30:03.928 18:28:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:30:03.928 18:28:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:03.928 18:28:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:03.928 18:28:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:30:03.928 18:28:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:03.928 18:28:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:30:03.928 18:28:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:30:03.928 18:28:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name 
in $base_bdev_names 00:30:03.928 18:28:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:30:03.928 18:28:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:30:03.928 18:28:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:03.928 18:28:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:03.928 18:28:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:03.928 18:28:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:30:03.928 18:28:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:30:03.928 18:28:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:30:03.928 18:28:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:30:03.928 18:28:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:30:03.928 18:28:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:03.928 18:28:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:03.928 18:28:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:03.928 18:28:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:30:03.928 18:28:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:30:03.928 18:28:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:30:03.928 18:28:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev4 00:30:03.928 18:28:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:30:03.928 18:28:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:03.928 18:28:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:03.928 18:28:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:04.189 18:28:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:30:04.189 18:28:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:30:04.189 18:28:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:30:04.189 18:28:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:04.189 18:28:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:04.189 [2024-12-06 18:28:34.885801] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:30:04.189 [2024-12-06 18:28:34.885952] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:30:04.189 [2024-12-06 18:28:34.886050] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:30:04.189 [2024-12-06 18:28:34.886126] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:30:04.189 [2024-12-06 18:28:34.886139] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:30:04.189 18:28:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:04.189 18:28:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 70999 00:30:04.189 18:28:34 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 70999 ']' 00:30:04.189 18:28:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 70999 00:30:04.189 18:28:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:30:04.189 18:28:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:04.189 18:28:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70999 00:30:04.189 18:28:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:30:04.189 killing process with pid 70999 00:30:04.189 18:28:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:30:04.189 18:28:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70999' 00:30:04.189 18:28:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 70999 00:30:04.189 [2024-12-06 18:28:34.943239] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:30:04.189 18:28:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 70999 00:30:04.476 [2024-12-06 18:28:35.348561] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:30:05.852 18:28:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:30:05.852 ************************************ 00:30:05.852 END TEST raid_state_function_test 00:30:05.852 ************************************ 00:30:05.852 00:30:05.852 real 0m11.707s 00:30:05.852 user 0m18.471s 00:30:05.852 sys 0m2.591s 00:30:05.852 18:28:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:05.852 18:28:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:05.852 18:28:36 bdev_raid -- bdev/bdev_raid.sh@969 -- # 
run_test raid_state_function_test_sb raid_state_function_test concat 4 true 00:30:05.852 18:28:36 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:30:05.852 18:28:36 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:05.852 18:28:36 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:30:05.852 ************************************ 00:30:05.852 START TEST raid_state_function_test_sb 00:30:05.852 ************************************ 00:30:05.852 18:28:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 4 true 00:30:05.852 18:28:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:30:05.852 18:28:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:30:05.852 18:28:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:30:05.852 18:28:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:30:05.852 18:28:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:30:05.852 18:28:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:30:05.852 18:28:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:30:05.852 18:28:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:30:05.852 18:28:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:30:05.852 18:28:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:30:05.852 18:28:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:30:05.852 18:28:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:30:05.852 18:28:36 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:30:05.852 18:28:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:30:05.852 18:28:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:30:05.852 18:28:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:30:05.852 18:28:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:30:05.852 18:28:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:30:05.852 18:28:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:30:05.852 18:28:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:30:05.852 18:28:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:30:05.852 18:28:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:30:05.852 18:28:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:30:05.852 18:28:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:30:05.852 18:28:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:30:05.852 18:28:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:30:05.852 18:28:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:30:05.852 18:28:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:30:05.852 18:28:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:30:05.852 Process raid pid: 71679 00:30:05.852 Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock... 00:30:05.852 18:28:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=71679 00:30:05.852 18:28:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:30:05.852 18:28:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 71679' 00:30:05.852 18:28:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 71679 00:30:05.852 18:28:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 71679 ']' 00:30:05.852 18:28:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:05.852 18:28:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:05.852 18:28:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:05.852 18:28:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:05.852 18:28:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:05.852 [2024-12-06 18:28:36.671750] Starting SPDK v25.01-pre git sha1 b6a18b192 / DPDK 24.03.0 initialization... 
00:30:05.852 [2024-12-06 18:28:36.672087] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:06.112 [2024-12-06 18:28:36.847949] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:06.112 [2024-12-06 18:28:36.966192] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:06.371 [2024-12-06 18:28:37.180014] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:30:06.371 [2024-12-06 18:28:37.180238] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:30:06.630 18:28:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:06.630 18:28:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:30:06.630 18:28:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:30:06.630 18:28:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:06.630 18:28:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:06.630 [2024-12-06 18:28:37.528404] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:30:06.630 [2024-12-06 18:28:37.528466] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:30:06.630 [2024-12-06 18:28:37.528478] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:30:06.630 [2024-12-06 18:28:37.528491] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:30:06.630 [2024-12-06 18:28:37.528506] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to 
find bdev with name: BaseBdev3 00:30:06.630 [2024-12-06 18:28:37.528518] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:30:06.630 [2024-12-06 18:28:37.528526] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:30:06.630 [2024-12-06 18:28:37.528538] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:30:06.630 18:28:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:06.630 18:28:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:30:06.630 18:28:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:30:06.630 18:28:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:30:06.630 18:28:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:30:06.630 18:28:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:30:06.630 18:28:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:30:06.630 18:28:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:30:06.630 18:28:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:30:06.630 18:28:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:30:06.630 18:28:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:30:06.630 18:28:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:30:06.630 18:28:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:06.630 
18:28:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:06.630 18:28:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:06.630 18:28:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:06.630 18:28:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:30:06.630 "name": "Existed_Raid", 00:30:06.630 "uuid": "394a2f49-f895-4f58-b43b-b21515abaa75", 00:30:06.630 "strip_size_kb": 64, 00:30:06.630 "state": "configuring", 00:30:06.630 "raid_level": "concat", 00:30:06.630 "superblock": true, 00:30:06.630 "num_base_bdevs": 4, 00:30:06.630 "num_base_bdevs_discovered": 0, 00:30:06.630 "num_base_bdevs_operational": 4, 00:30:06.630 "base_bdevs_list": [ 00:30:06.630 { 00:30:06.630 "name": "BaseBdev1", 00:30:06.630 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:06.630 "is_configured": false, 00:30:06.630 "data_offset": 0, 00:30:06.630 "data_size": 0 00:30:06.630 }, 00:30:06.630 { 00:30:06.630 "name": "BaseBdev2", 00:30:06.630 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:06.630 "is_configured": false, 00:30:06.630 "data_offset": 0, 00:30:06.630 "data_size": 0 00:30:06.630 }, 00:30:06.630 { 00:30:06.630 "name": "BaseBdev3", 00:30:06.630 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:06.630 "is_configured": false, 00:30:06.630 "data_offset": 0, 00:30:06.630 "data_size": 0 00:30:06.630 }, 00:30:06.630 { 00:30:06.630 "name": "BaseBdev4", 00:30:06.630 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:06.630 "is_configured": false, 00:30:06.630 "data_offset": 0, 00:30:06.630 "data_size": 0 00:30:06.630 } 00:30:06.630 ] 00:30:06.630 }' 00:30:06.630 18:28:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:30:06.630 18:28:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:07.198 18:28:37 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:30:07.198 18:28:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:07.198 18:28:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:07.198 [2024-12-06 18:28:37.967737] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:30:07.198 [2024-12-06 18:28:37.967781] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:30:07.198 18:28:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:07.198 18:28:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:30:07.198 18:28:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:07.198 18:28:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:07.198 [2024-12-06 18:28:37.979722] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:30:07.198 [2024-12-06 18:28:37.979771] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:30:07.198 [2024-12-06 18:28:37.979782] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:30:07.198 [2024-12-06 18:28:37.979794] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:30:07.198 [2024-12-06 18:28:37.979803] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:30:07.198 [2024-12-06 18:28:37.979815] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:30:07.198 [2024-12-06 18:28:37.979822] bdev.c:8674:bdev_open_ext: *NOTICE*: 
Currently unable to find bdev with name: BaseBdev4 00:30:07.198 [2024-12-06 18:28:37.979834] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:30:07.198 18:28:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:07.198 18:28:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:30:07.198 18:28:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:07.198 18:28:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:07.198 [2024-12-06 18:28:38.029797] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:30:07.198 BaseBdev1 00:30:07.198 18:28:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:07.198 18:28:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:30:07.198 18:28:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:30:07.198 18:28:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:30:07.198 18:28:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:30:07.198 18:28:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:30:07.198 18:28:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:30:07.198 18:28:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:30:07.198 18:28:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:07.198 18:28:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:07.198 18:28:38 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:07.198 18:28:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:30:07.198 18:28:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:07.198 18:28:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:07.198 [ 00:30:07.198 { 00:30:07.198 "name": "BaseBdev1", 00:30:07.198 "aliases": [ 00:30:07.198 "5cf5fc8b-85dd-432d-9a7c-ce1a3511e68d" 00:30:07.198 ], 00:30:07.198 "product_name": "Malloc disk", 00:30:07.198 "block_size": 512, 00:30:07.198 "num_blocks": 65536, 00:30:07.198 "uuid": "5cf5fc8b-85dd-432d-9a7c-ce1a3511e68d", 00:30:07.198 "assigned_rate_limits": { 00:30:07.198 "rw_ios_per_sec": 0, 00:30:07.198 "rw_mbytes_per_sec": 0, 00:30:07.198 "r_mbytes_per_sec": 0, 00:30:07.198 "w_mbytes_per_sec": 0 00:30:07.198 }, 00:30:07.198 "claimed": true, 00:30:07.198 "claim_type": "exclusive_write", 00:30:07.198 "zoned": false, 00:30:07.198 "supported_io_types": { 00:30:07.198 "read": true, 00:30:07.198 "write": true, 00:30:07.198 "unmap": true, 00:30:07.198 "flush": true, 00:30:07.198 "reset": true, 00:30:07.198 "nvme_admin": false, 00:30:07.198 "nvme_io": false, 00:30:07.198 "nvme_io_md": false, 00:30:07.198 "write_zeroes": true, 00:30:07.198 "zcopy": true, 00:30:07.198 "get_zone_info": false, 00:30:07.198 "zone_management": false, 00:30:07.198 "zone_append": false, 00:30:07.198 "compare": false, 00:30:07.198 "compare_and_write": false, 00:30:07.198 "abort": true, 00:30:07.198 "seek_hole": false, 00:30:07.198 "seek_data": false, 00:30:07.198 "copy": true, 00:30:07.198 "nvme_iov_md": false 00:30:07.198 }, 00:30:07.198 "memory_domains": [ 00:30:07.198 { 00:30:07.198 "dma_device_id": "system", 00:30:07.198 "dma_device_type": 1 00:30:07.198 }, 00:30:07.198 { 00:30:07.198 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:07.198 "dma_device_type": 2 00:30:07.198 } 
00:30:07.198 ], 00:30:07.198 "driver_specific": {} 00:30:07.198 } 00:30:07.198 ] 00:30:07.198 18:28:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:07.198 18:28:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:30:07.198 18:28:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:30:07.198 18:28:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:30:07.198 18:28:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:30:07.199 18:28:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:30:07.199 18:28:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:30:07.199 18:28:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:30:07.199 18:28:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:30:07.199 18:28:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:30:07.199 18:28:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:30:07.199 18:28:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:30:07.199 18:28:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:07.199 18:28:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:07.199 18:28:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:07.199 18:28:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:30:07.199 18:28:38 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:07.199 18:28:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:30:07.199 "name": "Existed_Raid", 00:30:07.199 "uuid": "a16ab39c-79ae-48f7-9185-f0b53891511a", 00:30:07.199 "strip_size_kb": 64, 00:30:07.199 "state": "configuring", 00:30:07.199 "raid_level": "concat", 00:30:07.199 "superblock": true, 00:30:07.199 "num_base_bdevs": 4, 00:30:07.199 "num_base_bdevs_discovered": 1, 00:30:07.199 "num_base_bdevs_operational": 4, 00:30:07.199 "base_bdevs_list": [ 00:30:07.199 { 00:30:07.199 "name": "BaseBdev1", 00:30:07.199 "uuid": "5cf5fc8b-85dd-432d-9a7c-ce1a3511e68d", 00:30:07.199 "is_configured": true, 00:30:07.199 "data_offset": 2048, 00:30:07.199 "data_size": 63488 00:30:07.199 }, 00:30:07.199 { 00:30:07.199 "name": "BaseBdev2", 00:30:07.199 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:07.199 "is_configured": false, 00:30:07.199 "data_offset": 0, 00:30:07.199 "data_size": 0 00:30:07.199 }, 00:30:07.199 { 00:30:07.199 "name": "BaseBdev3", 00:30:07.199 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:07.199 "is_configured": false, 00:30:07.199 "data_offset": 0, 00:30:07.199 "data_size": 0 00:30:07.199 }, 00:30:07.199 { 00:30:07.199 "name": "BaseBdev4", 00:30:07.199 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:07.199 "is_configured": false, 00:30:07.199 "data_offset": 0, 00:30:07.199 "data_size": 0 00:30:07.199 } 00:30:07.199 ] 00:30:07.199 }' 00:30:07.199 18:28:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:30:07.199 18:28:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:07.767 18:28:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:30:07.767 18:28:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:07.767 18:28:38 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:07.767 [2024-12-06 18:28:38.509643] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:30:07.767 [2024-12-06 18:28:38.509707] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:30:07.767 18:28:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:07.767 18:28:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:30:07.767 18:28:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:07.767 18:28:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:07.767 [2024-12-06 18:28:38.521704] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:30:07.767 [2024-12-06 18:28:38.523883] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:30:07.767 [2024-12-06 18:28:38.523934] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:30:07.767 [2024-12-06 18:28:38.523945] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:30:07.767 [2024-12-06 18:28:38.523960] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:30:07.767 [2024-12-06 18:28:38.523968] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:30:07.767 [2024-12-06 18:28:38.523979] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:30:07.767 18:28:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:07.767 18:28:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # 
(( i = 1 )) 00:30:07.767 18:28:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:30:07.767 18:28:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:30:07.767 18:28:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:30:07.767 18:28:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:30:07.767 18:28:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:30:07.767 18:28:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:30:07.767 18:28:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:30:07.767 18:28:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:30:07.767 18:28:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:30:07.767 18:28:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:30:07.767 18:28:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:30:07.767 18:28:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:30:07.767 18:28:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:07.767 18:28:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:07.767 18:28:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:07.767 18:28:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:07.767 18:28:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:30:07.767 "name": "Existed_Raid", 00:30:07.767 "uuid": "ed1619dc-c8ec-4983-bfdd-0fdee69a4af3", 00:30:07.767 "strip_size_kb": 64, 00:30:07.767 "state": "configuring", 00:30:07.767 "raid_level": "concat", 00:30:07.767 "superblock": true, 00:30:07.767 "num_base_bdevs": 4, 00:30:07.767 "num_base_bdevs_discovered": 1, 00:30:07.767 "num_base_bdevs_operational": 4, 00:30:07.767 "base_bdevs_list": [ 00:30:07.767 { 00:30:07.767 "name": "BaseBdev1", 00:30:07.767 "uuid": "5cf5fc8b-85dd-432d-9a7c-ce1a3511e68d", 00:30:07.767 "is_configured": true, 00:30:07.767 "data_offset": 2048, 00:30:07.767 "data_size": 63488 00:30:07.767 }, 00:30:07.767 { 00:30:07.767 "name": "BaseBdev2", 00:30:07.767 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:07.767 "is_configured": false, 00:30:07.767 "data_offset": 0, 00:30:07.767 "data_size": 0 00:30:07.767 }, 00:30:07.767 { 00:30:07.767 "name": "BaseBdev3", 00:30:07.767 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:07.767 "is_configured": false, 00:30:07.767 "data_offset": 0, 00:30:07.767 "data_size": 0 00:30:07.767 }, 00:30:07.767 { 00:30:07.767 "name": "BaseBdev4", 00:30:07.767 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:07.767 "is_configured": false, 00:30:07.767 "data_offset": 0, 00:30:07.768 "data_size": 0 00:30:07.768 } 00:30:07.768 ] 00:30:07.768 }' 00:30:07.768 18:28:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:30:07.768 18:28:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:08.026 18:28:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:30:08.026 18:28:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:08.026 18:28:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:08.285 [2024-12-06 18:28:38.993075] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev2 is claimed 00:30:08.285 BaseBdev2 00:30:08.285 18:28:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:08.285 18:28:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:30:08.285 18:28:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:30:08.285 18:28:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:30:08.285 18:28:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:30:08.285 18:28:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:30:08.285 18:28:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:30:08.285 18:28:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:30:08.285 18:28:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:08.285 18:28:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:08.285 18:28:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:08.285 18:28:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:30:08.285 18:28:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:08.285 18:28:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:08.285 [ 00:30:08.285 { 00:30:08.285 "name": "BaseBdev2", 00:30:08.285 "aliases": [ 00:30:08.285 "393b87bd-4c81-4ccb-89cd-56918ac1c1f8" 00:30:08.285 ], 00:30:08.285 "product_name": "Malloc disk", 00:30:08.285 "block_size": 512, 00:30:08.285 "num_blocks": 65536, 00:30:08.285 "uuid": "393b87bd-4c81-4ccb-89cd-56918ac1c1f8", 
00:30:08.285 "assigned_rate_limits": { 00:30:08.285 "rw_ios_per_sec": 0, 00:30:08.285 "rw_mbytes_per_sec": 0, 00:30:08.285 "r_mbytes_per_sec": 0, 00:30:08.285 "w_mbytes_per_sec": 0 00:30:08.285 }, 00:30:08.285 "claimed": true, 00:30:08.285 "claim_type": "exclusive_write", 00:30:08.285 "zoned": false, 00:30:08.285 "supported_io_types": { 00:30:08.285 "read": true, 00:30:08.285 "write": true, 00:30:08.285 "unmap": true, 00:30:08.285 "flush": true, 00:30:08.285 "reset": true, 00:30:08.285 "nvme_admin": false, 00:30:08.285 "nvme_io": false, 00:30:08.285 "nvme_io_md": false, 00:30:08.285 "write_zeroes": true, 00:30:08.285 "zcopy": true, 00:30:08.285 "get_zone_info": false, 00:30:08.285 "zone_management": false, 00:30:08.285 "zone_append": false, 00:30:08.285 "compare": false, 00:30:08.285 "compare_and_write": false, 00:30:08.285 "abort": true, 00:30:08.285 "seek_hole": false, 00:30:08.285 "seek_data": false, 00:30:08.285 "copy": true, 00:30:08.285 "nvme_iov_md": false 00:30:08.285 }, 00:30:08.285 "memory_domains": [ 00:30:08.285 { 00:30:08.285 "dma_device_id": "system", 00:30:08.285 "dma_device_type": 1 00:30:08.285 }, 00:30:08.285 { 00:30:08.285 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:08.285 "dma_device_type": 2 00:30:08.285 } 00:30:08.285 ], 00:30:08.285 "driver_specific": {} 00:30:08.285 } 00:30:08.285 ] 00:30:08.285 18:28:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:08.285 18:28:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:30:08.285 18:28:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:30:08.285 18:28:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:30:08.285 18:28:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:30:08.285 18:28:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=Existed_Raid 00:30:08.285 18:28:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:30:08.285 18:28:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:30:08.285 18:28:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:30:08.285 18:28:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:30:08.285 18:28:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:30:08.285 18:28:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:30:08.285 18:28:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:30:08.285 18:28:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:30:08.285 18:28:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:08.285 18:28:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:30:08.285 18:28:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:08.285 18:28:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:08.285 18:28:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:08.285 18:28:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:30:08.285 "name": "Existed_Raid", 00:30:08.285 "uuid": "ed1619dc-c8ec-4983-bfdd-0fdee69a4af3", 00:30:08.285 "strip_size_kb": 64, 00:30:08.285 "state": "configuring", 00:30:08.285 "raid_level": "concat", 00:30:08.285 "superblock": true, 00:30:08.285 "num_base_bdevs": 4, 00:30:08.285 "num_base_bdevs_discovered": 2, 00:30:08.285 
"num_base_bdevs_operational": 4, 00:30:08.285 "base_bdevs_list": [ 00:30:08.285 { 00:30:08.285 "name": "BaseBdev1", 00:30:08.285 "uuid": "5cf5fc8b-85dd-432d-9a7c-ce1a3511e68d", 00:30:08.285 "is_configured": true, 00:30:08.285 "data_offset": 2048, 00:30:08.285 "data_size": 63488 00:30:08.285 }, 00:30:08.285 { 00:30:08.285 "name": "BaseBdev2", 00:30:08.285 "uuid": "393b87bd-4c81-4ccb-89cd-56918ac1c1f8", 00:30:08.285 "is_configured": true, 00:30:08.285 "data_offset": 2048, 00:30:08.285 "data_size": 63488 00:30:08.285 }, 00:30:08.285 { 00:30:08.285 "name": "BaseBdev3", 00:30:08.285 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:08.285 "is_configured": false, 00:30:08.285 "data_offset": 0, 00:30:08.285 "data_size": 0 00:30:08.285 }, 00:30:08.285 { 00:30:08.285 "name": "BaseBdev4", 00:30:08.285 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:08.285 "is_configured": false, 00:30:08.285 "data_offset": 0, 00:30:08.285 "data_size": 0 00:30:08.285 } 00:30:08.285 ] 00:30:08.285 }' 00:30:08.285 18:28:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:30:08.285 18:28:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:08.544 18:28:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:30:08.544 18:28:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:08.544 18:28:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:08.817 [2024-12-06 18:28:39.542709] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:30:08.817 BaseBdev3 00:30:08.817 18:28:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:08.817 18:28:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:30:08.817 18:28:39 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:30:08.817 18:28:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:30:08.817 18:28:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:30:08.817 18:28:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:30:08.817 18:28:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:30:08.817 18:28:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:30:08.817 18:28:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:08.817 18:28:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:08.817 18:28:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:08.817 18:28:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:30:08.817 18:28:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:08.817 18:28:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:08.817 [ 00:30:08.817 { 00:30:08.817 "name": "BaseBdev3", 00:30:08.817 "aliases": [ 00:30:08.817 "db5dc3d8-8e90-4f7f-941c-6a9bc4556e30" 00:30:08.817 ], 00:30:08.817 "product_name": "Malloc disk", 00:30:08.817 "block_size": 512, 00:30:08.817 "num_blocks": 65536, 00:30:08.817 "uuid": "db5dc3d8-8e90-4f7f-941c-6a9bc4556e30", 00:30:08.817 "assigned_rate_limits": { 00:30:08.817 "rw_ios_per_sec": 0, 00:30:08.817 "rw_mbytes_per_sec": 0, 00:30:08.817 "r_mbytes_per_sec": 0, 00:30:08.817 "w_mbytes_per_sec": 0 00:30:08.817 }, 00:30:08.817 "claimed": true, 00:30:08.817 "claim_type": "exclusive_write", 00:30:08.817 "zoned": false, 00:30:08.817 "supported_io_types": { 
00:30:08.817 "read": true, 00:30:08.817 "write": true, 00:30:08.817 "unmap": true, 00:30:08.817 "flush": true, 00:30:08.817 "reset": true, 00:30:08.817 "nvme_admin": false, 00:30:08.817 "nvme_io": false, 00:30:08.817 "nvme_io_md": false, 00:30:08.817 "write_zeroes": true, 00:30:08.817 "zcopy": true, 00:30:08.817 "get_zone_info": false, 00:30:08.817 "zone_management": false, 00:30:08.817 "zone_append": false, 00:30:08.817 "compare": false, 00:30:08.817 "compare_and_write": false, 00:30:08.817 "abort": true, 00:30:08.817 "seek_hole": false, 00:30:08.817 "seek_data": false, 00:30:08.817 "copy": true, 00:30:08.817 "nvme_iov_md": false 00:30:08.817 }, 00:30:08.817 "memory_domains": [ 00:30:08.817 { 00:30:08.817 "dma_device_id": "system", 00:30:08.817 "dma_device_type": 1 00:30:08.817 }, 00:30:08.817 { 00:30:08.817 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:08.817 "dma_device_type": 2 00:30:08.817 } 00:30:08.817 ], 00:30:08.817 "driver_specific": {} 00:30:08.817 } 00:30:08.817 ] 00:30:08.817 18:28:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:08.817 18:28:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:30:08.817 18:28:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:30:08.817 18:28:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:30:08.817 18:28:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:30:08.817 18:28:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:30:08.817 18:28:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:30:08.817 18:28:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:30:08.817 18:28:39 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@106 -- # local strip_size=64 00:30:08.817 18:28:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:30:08.817 18:28:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:30:08.817 18:28:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:30:08.817 18:28:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:30:08.817 18:28:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:30:08.817 18:28:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:08.817 18:28:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:08.817 18:28:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:08.817 18:28:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:30:08.817 18:28:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:08.817 18:28:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:30:08.817 "name": "Existed_Raid", 00:30:08.817 "uuid": "ed1619dc-c8ec-4983-bfdd-0fdee69a4af3", 00:30:08.817 "strip_size_kb": 64, 00:30:08.817 "state": "configuring", 00:30:08.817 "raid_level": "concat", 00:30:08.817 "superblock": true, 00:30:08.817 "num_base_bdevs": 4, 00:30:08.817 "num_base_bdevs_discovered": 3, 00:30:08.817 "num_base_bdevs_operational": 4, 00:30:08.817 "base_bdevs_list": [ 00:30:08.817 { 00:30:08.817 "name": "BaseBdev1", 00:30:08.817 "uuid": "5cf5fc8b-85dd-432d-9a7c-ce1a3511e68d", 00:30:08.817 "is_configured": true, 00:30:08.817 "data_offset": 2048, 00:30:08.817 "data_size": 63488 00:30:08.817 }, 00:30:08.817 { 00:30:08.817 "name": "BaseBdev2", 00:30:08.817 
"uuid": "393b87bd-4c81-4ccb-89cd-56918ac1c1f8", 00:30:08.817 "is_configured": true, 00:30:08.817 "data_offset": 2048, 00:30:08.817 "data_size": 63488 00:30:08.817 }, 00:30:08.817 { 00:30:08.817 "name": "BaseBdev3", 00:30:08.817 "uuid": "db5dc3d8-8e90-4f7f-941c-6a9bc4556e30", 00:30:08.817 "is_configured": true, 00:30:08.817 "data_offset": 2048, 00:30:08.817 "data_size": 63488 00:30:08.817 }, 00:30:08.817 { 00:30:08.817 "name": "BaseBdev4", 00:30:08.817 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:08.817 "is_configured": false, 00:30:08.817 "data_offset": 0, 00:30:08.817 "data_size": 0 00:30:08.817 } 00:30:08.817 ] 00:30:08.817 }' 00:30:08.817 18:28:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:30:08.817 18:28:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:09.075 18:28:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:30:09.075 18:28:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:09.075 18:28:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:09.332 [2024-12-06 18:28:40.040588] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:30:09.332 [2024-12-06 18:28:40.040912] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:30:09.332 [2024-12-06 18:28:40.040929] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:30:09.332 [2024-12-06 18:28:40.041267] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:30:09.332 BaseBdev4 00:30:09.332 [2024-12-06 18:28:40.041454] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:30:09.332 [2024-12-06 18:28:40.041468] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, 
raid_bdev 0x617000007e80 00:30:09.332 [2024-12-06 18:28:40.041624] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:30:09.332 18:28:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:09.332 18:28:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:30:09.332 18:28:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:30:09.332 18:28:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:30:09.332 18:28:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:30:09.332 18:28:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:30:09.332 18:28:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:30:09.332 18:28:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:30:09.332 18:28:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:09.332 18:28:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:09.332 18:28:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:09.332 18:28:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:30:09.332 18:28:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:09.332 18:28:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:09.332 [ 00:30:09.332 { 00:30:09.332 "name": "BaseBdev4", 00:30:09.332 "aliases": [ 00:30:09.332 "ce34f67f-1cce-426a-94c3-a048343b2337" 00:30:09.332 ], 00:30:09.332 "product_name": "Malloc disk", 00:30:09.332 "block_size": 512, 00:30:09.332 
"num_blocks": 65536, 00:30:09.332 "uuid": "ce34f67f-1cce-426a-94c3-a048343b2337", 00:30:09.332 "assigned_rate_limits": { 00:30:09.332 "rw_ios_per_sec": 0, 00:30:09.332 "rw_mbytes_per_sec": 0, 00:30:09.332 "r_mbytes_per_sec": 0, 00:30:09.332 "w_mbytes_per_sec": 0 00:30:09.332 }, 00:30:09.332 "claimed": true, 00:30:09.332 "claim_type": "exclusive_write", 00:30:09.332 "zoned": false, 00:30:09.332 "supported_io_types": { 00:30:09.332 "read": true, 00:30:09.332 "write": true, 00:30:09.332 "unmap": true, 00:30:09.332 "flush": true, 00:30:09.332 "reset": true, 00:30:09.332 "nvme_admin": false, 00:30:09.332 "nvme_io": false, 00:30:09.332 "nvme_io_md": false, 00:30:09.332 "write_zeroes": true, 00:30:09.332 "zcopy": true, 00:30:09.332 "get_zone_info": false, 00:30:09.332 "zone_management": false, 00:30:09.332 "zone_append": false, 00:30:09.332 "compare": false, 00:30:09.332 "compare_and_write": false, 00:30:09.332 "abort": true, 00:30:09.332 "seek_hole": false, 00:30:09.332 "seek_data": false, 00:30:09.332 "copy": true, 00:30:09.332 "nvme_iov_md": false 00:30:09.332 }, 00:30:09.332 "memory_domains": [ 00:30:09.332 { 00:30:09.332 "dma_device_id": "system", 00:30:09.332 "dma_device_type": 1 00:30:09.332 }, 00:30:09.332 { 00:30:09.332 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:09.332 "dma_device_type": 2 00:30:09.332 } 00:30:09.332 ], 00:30:09.332 "driver_specific": {} 00:30:09.332 } 00:30:09.332 ] 00:30:09.332 18:28:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:09.332 18:28:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:30:09.332 18:28:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:30:09.332 18:28:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:30:09.332 18:28:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 
00:30:09.332 18:28:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:30:09.332 18:28:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:30:09.332 18:28:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:30:09.332 18:28:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:30:09.332 18:28:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:30:09.332 18:28:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:30:09.332 18:28:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:30:09.332 18:28:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:30:09.332 18:28:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:30:09.332 18:28:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:09.332 18:28:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:30:09.332 18:28:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:09.332 18:28:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:09.332 18:28:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:09.332 18:28:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:30:09.332 "name": "Existed_Raid", 00:30:09.332 "uuid": "ed1619dc-c8ec-4983-bfdd-0fdee69a4af3", 00:30:09.332 "strip_size_kb": 64, 00:30:09.332 "state": "online", 00:30:09.332 "raid_level": "concat", 00:30:09.332 "superblock": true, 00:30:09.332 "num_base_bdevs": 4, 
00:30:09.332 "num_base_bdevs_discovered": 4, 00:30:09.332 "num_base_bdevs_operational": 4, 00:30:09.332 "base_bdevs_list": [ 00:30:09.332 { 00:30:09.332 "name": "BaseBdev1", 00:30:09.332 "uuid": "5cf5fc8b-85dd-432d-9a7c-ce1a3511e68d", 00:30:09.332 "is_configured": true, 00:30:09.332 "data_offset": 2048, 00:30:09.332 "data_size": 63488 00:30:09.332 }, 00:30:09.332 { 00:30:09.332 "name": "BaseBdev2", 00:30:09.332 "uuid": "393b87bd-4c81-4ccb-89cd-56918ac1c1f8", 00:30:09.332 "is_configured": true, 00:30:09.332 "data_offset": 2048, 00:30:09.332 "data_size": 63488 00:30:09.332 }, 00:30:09.332 { 00:30:09.332 "name": "BaseBdev3", 00:30:09.332 "uuid": "db5dc3d8-8e90-4f7f-941c-6a9bc4556e30", 00:30:09.332 "is_configured": true, 00:30:09.332 "data_offset": 2048, 00:30:09.332 "data_size": 63488 00:30:09.332 }, 00:30:09.332 { 00:30:09.332 "name": "BaseBdev4", 00:30:09.332 "uuid": "ce34f67f-1cce-426a-94c3-a048343b2337", 00:30:09.332 "is_configured": true, 00:30:09.332 "data_offset": 2048, 00:30:09.332 "data_size": 63488 00:30:09.332 } 00:30:09.332 ] 00:30:09.332 }' 00:30:09.332 18:28:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:30:09.332 18:28:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:09.900 18:28:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:30:09.900 18:28:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:30:09.900 18:28:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:30:09.900 18:28:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:30:09.900 18:28:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:30:09.900 18:28:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:30:09.900 
18:28:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:30:09.900 18:28:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:30:09.900 18:28:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:09.900 18:28:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:09.900 [2024-12-06 18:28:40.556329] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:30:09.900 18:28:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:09.900 18:28:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:30:09.900 "name": "Existed_Raid", 00:30:09.900 "aliases": [ 00:30:09.900 "ed1619dc-c8ec-4983-bfdd-0fdee69a4af3" 00:30:09.900 ], 00:30:09.900 "product_name": "Raid Volume", 00:30:09.900 "block_size": 512, 00:30:09.900 "num_blocks": 253952, 00:30:09.900 "uuid": "ed1619dc-c8ec-4983-bfdd-0fdee69a4af3", 00:30:09.900 "assigned_rate_limits": { 00:30:09.900 "rw_ios_per_sec": 0, 00:30:09.900 "rw_mbytes_per_sec": 0, 00:30:09.900 "r_mbytes_per_sec": 0, 00:30:09.900 "w_mbytes_per_sec": 0 00:30:09.900 }, 00:30:09.900 "claimed": false, 00:30:09.900 "zoned": false, 00:30:09.900 "supported_io_types": { 00:30:09.900 "read": true, 00:30:09.900 "write": true, 00:30:09.900 "unmap": true, 00:30:09.900 "flush": true, 00:30:09.900 "reset": true, 00:30:09.900 "nvme_admin": false, 00:30:09.900 "nvme_io": false, 00:30:09.900 "nvme_io_md": false, 00:30:09.900 "write_zeroes": true, 00:30:09.900 "zcopy": false, 00:30:09.900 "get_zone_info": false, 00:30:09.900 "zone_management": false, 00:30:09.900 "zone_append": false, 00:30:09.900 "compare": false, 00:30:09.900 "compare_and_write": false, 00:30:09.900 "abort": false, 00:30:09.900 "seek_hole": false, 00:30:09.900 "seek_data": false, 00:30:09.900 "copy": false, 00:30:09.900 
"nvme_iov_md": false 00:30:09.900 }, 00:30:09.900 "memory_domains": [ 00:30:09.900 { 00:30:09.900 "dma_device_id": "system", 00:30:09.900 "dma_device_type": 1 00:30:09.900 }, 00:30:09.900 { 00:30:09.900 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:09.900 "dma_device_type": 2 00:30:09.900 }, 00:30:09.900 { 00:30:09.900 "dma_device_id": "system", 00:30:09.900 "dma_device_type": 1 00:30:09.900 }, 00:30:09.900 { 00:30:09.900 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:09.900 "dma_device_type": 2 00:30:09.900 }, 00:30:09.900 { 00:30:09.900 "dma_device_id": "system", 00:30:09.900 "dma_device_type": 1 00:30:09.900 }, 00:30:09.900 { 00:30:09.900 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:09.900 "dma_device_type": 2 00:30:09.900 }, 00:30:09.900 { 00:30:09.900 "dma_device_id": "system", 00:30:09.900 "dma_device_type": 1 00:30:09.900 }, 00:30:09.900 { 00:30:09.900 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:09.900 "dma_device_type": 2 00:30:09.900 } 00:30:09.900 ], 00:30:09.900 "driver_specific": { 00:30:09.900 "raid": { 00:30:09.900 "uuid": "ed1619dc-c8ec-4983-bfdd-0fdee69a4af3", 00:30:09.900 "strip_size_kb": 64, 00:30:09.900 "state": "online", 00:30:09.900 "raid_level": "concat", 00:30:09.900 "superblock": true, 00:30:09.900 "num_base_bdevs": 4, 00:30:09.900 "num_base_bdevs_discovered": 4, 00:30:09.900 "num_base_bdevs_operational": 4, 00:30:09.900 "base_bdevs_list": [ 00:30:09.900 { 00:30:09.900 "name": "BaseBdev1", 00:30:09.900 "uuid": "5cf5fc8b-85dd-432d-9a7c-ce1a3511e68d", 00:30:09.900 "is_configured": true, 00:30:09.900 "data_offset": 2048, 00:30:09.900 "data_size": 63488 00:30:09.900 }, 00:30:09.900 { 00:30:09.900 "name": "BaseBdev2", 00:30:09.900 "uuid": "393b87bd-4c81-4ccb-89cd-56918ac1c1f8", 00:30:09.900 "is_configured": true, 00:30:09.900 "data_offset": 2048, 00:30:09.900 "data_size": 63488 00:30:09.900 }, 00:30:09.900 { 00:30:09.900 "name": "BaseBdev3", 00:30:09.900 "uuid": "db5dc3d8-8e90-4f7f-941c-6a9bc4556e30", 00:30:09.900 "is_configured": true, 
00:30:09.900 "data_offset": 2048, 00:30:09.900 "data_size": 63488 00:30:09.900 }, 00:30:09.900 { 00:30:09.900 "name": "BaseBdev4", 00:30:09.900 "uuid": "ce34f67f-1cce-426a-94c3-a048343b2337", 00:30:09.900 "is_configured": true, 00:30:09.900 "data_offset": 2048, 00:30:09.900 "data_size": 63488 00:30:09.900 } 00:30:09.900 ] 00:30:09.900 } 00:30:09.900 } 00:30:09.900 }' 00:30:09.900 18:28:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:30:09.900 18:28:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:30:09.900 BaseBdev2 00:30:09.900 BaseBdev3 00:30:09.900 BaseBdev4' 00:30:09.900 18:28:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:30:09.900 18:28:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:30:09.900 18:28:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:30:09.900 18:28:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:30:09.900 18:28:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:09.900 18:28:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:09.900 18:28:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:30:09.900 18:28:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:09.900 18:28:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:30:09.900 18:28:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:30:09.900 18:28:40 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:30:09.900 18:28:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:30:09.900 18:28:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:30:09.900 18:28:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:09.900 18:28:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:09.900 18:28:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:09.900 18:28:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:30:09.900 18:28:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:30:09.900 18:28:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:30:09.900 18:28:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:30:09.900 18:28:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:30:09.900 18:28:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:09.900 18:28:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:09.900 18:28:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:09.900 18:28:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:30:09.900 18:28:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:30:09.900 18:28:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # 
for name in $base_bdev_names 00:30:09.901 18:28:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:30:09.901 18:28:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:30:09.901 18:28:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:09.901 18:28:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:10.160 18:28:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:10.160 18:28:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:30:10.160 18:28:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:30:10.160 18:28:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:30:10.160 18:28:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:10.160 18:28:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:10.160 [2024-12-06 18:28:40.875543] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:30:10.160 [2024-12-06 18:28:40.875577] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:30:10.160 [2024-12-06 18:28:40.875643] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:30:10.160 18:28:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:10.160 18:28:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:30:10.160 18:28:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:30:10.160 18:28:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # 
case $1 in 00:30:10.160 18:28:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:30:10.160 18:28:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:30:10.160 18:28:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3 00:30:10.160 18:28:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:30:10.160 18:28:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:30:10.160 18:28:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:30:10.160 18:28:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:30:10.160 18:28:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:30:10.160 18:28:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:30:10.160 18:28:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:30:10.160 18:28:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:30:10.160 18:28:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:30:10.160 18:28:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:10.160 18:28:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:30:10.160 18:28:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:10.160 18:28:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:10.160 18:28:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:30:10.160 18:28:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:30:10.160 "name": "Existed_Raid", 00:30:10.160 "uuid": "ed1619dc-c8ec-4983-bfdd-0fdee69a4af3", 00:30:10.160 "strip_size_kb": 64, 00:30:10.160 "state": "offline", 00:30:10.160 "raid_level": "concat", 00:30:10.160 "superblock": true, 00:30:10.160 "num_base_bdevs": 4, 00:30:10.160 "num_base_bdevs_discovered": 3, 00:30:10.160 "num_base_bdevs_operational": 3, 00:30:10.160 "base_bdevs_list": [ 00:30:10.160 { 00:30:10.160 "name": null, 00:30:10.160 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:10.160 "is_configured": false, 00:30:10.160 "data_offset": 0, 00:30:10.160 "data_size": 63488 00:30:10.160 }, 00:30:10.160 { 00:30:10.160 "name": "BaseBdev2", 00:30:10.160 "uuid": "393b87bd-4c81-4ccb-89cd-56918ac1c1f8", 00:30:10.160 "is_configured": true, 00:30:10.160 "data_offset": 2048, 00:30:10.160 "data_size": 63488 00:30:10.160 }, 00:30:10.160 { 00:30:10.160 "name": "BaseBdev3", 00:30:10.160 "uuid": "db5dc3d8-8e90-4f7f-941c-6a9bc4556e30", 00:30:10.160 "is_configured": true, 00:30:10.160 "data_offset": 2048, 00:30:10.160 "data_size": 63488 00:30:10.160 }, 00:30:10.160 { 00:30:10.160 "name": "BaseBdev4", 00:30:10.160 "uuid": "ce34f67f-1cce-426a-94c3-a048343b2337", 00:30:10.160 "is_configured": true, 00:30:10.160 "data_offset": 2048, 00:30:10.160 "data_size": 63488 00:30:10.160 } 00:30:10.160 ] 00:30:10.160 }' 00:30:10.160 18:28:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:30:10.160 18:28:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:10.739 18:28:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:30:10.739 18:28:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:30:10.739 18:28:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:30:10.739 18:28:41 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:10.739 18:28:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:10.739 18:28:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:10.739 18:28:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:10.739 18:28:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:30:10.739 18:28:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:30:10.739 18:28:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:30:10.739 18:28:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:10.739 18:28:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:10.739 [2024-12-06 18:28:41.454055] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:30:10.739 18:28:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:10.739 18:28:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:30:10.739 18:28:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:30:10.739 18:28:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:10.739 18:28:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:10.739 18:28:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:30:10.739 18:28:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:10.739 18:28:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:30:10.739 18:28:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:30:10.739 18:28:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:30:10.739 18:28:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:30:10.739 18:28:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:10.739 18:28:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:10.739 [2024-12-06 18:28:41.606923] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:30:10.999 18:28:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:10.999 18:28:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:30:10.999 18:28:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:30:10.999 18:28:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:10.999 18:28:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:30:10.999 18:28:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:10.999 18:28:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:10.999 18:28:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:10.999 18:28:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:30:10.999 18:28:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:30:10.999 18:28:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:30:10.999 18:28:41 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:10.999 18:28:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:10.999 [2024-12-06 18:28:41.757993] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:30:10.999 [2024-12-06 18:28:41.758046] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:30:10.999 18:28:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:10.999 18:28:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:30:10.999 18:28:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:30:10.999 18:28:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:10.999 18:28:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:30:10.999 18:28:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:10.999 18:28:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:10.999 18:28:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:10.999 18:28:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:30:10.999 18:28:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:30:10.999 18:28:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:30:10.999 18:28:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:30:10.999 18:28:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:30:10.999 18:28:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 
32 512 -b BaseBdev2 00:30:10.999 18:28:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:10.999 18:28:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:11.259 BaseBdev2 00:30:11.259 18:28:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:11.259 18:28:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:30:11.259 18:28:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:30:11.259 18:28:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:30:11.259 18:28:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:30:11.259 18:28:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:30:11.259 18:28:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:30:11.259 18:28:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:30:11.259 18:28:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:11.259 18:28:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:11.259 18:28:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:11.259 18:28:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:30:11.259 18:28:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:11.259 18:28:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:11.259 [ 00:30:11.259 { 00:30:11.259 "name": "BaseBdev2", 00:30:11.259 "aliases": [ 00:30:11.259 
"379b02bb-45e0-4c35-93c4-11509ce89b35" 00:30:11.259 ], 00:30:11.259 "product_name": "Malloc disk", 00:30:11.259 "block_size": 512, 00:30:11.259 "num_blocks": 65536, 00:30:11.259 "uuid": "379b02bb-45e0-4c35-93c4-11509ce89b35", 00:30:11.259 "assigned_rate_limits": { 00:30:11.259 "rw_ios_per_sec": 0, 00:30:11.259 "rw_mbytes_per_sec": 0, 00:30:11.259 "r_mbytes_per_sec": 0, 00:30:11.259 "w_mbytes_per_sec": 0 00:30:11.259 }, 00:30:11.259 "claimed": false, 00:30:11.259 "zoned": false, 00:30:11.259 "supported_io_types": { 00:30:11.259 "read": true, 00:30:11.259 "write": true, 00:30:11.259 "unmap": true, 00:30:11.259 "flush": true, 00:30:11.259 "reset": true, 00:30:11.259 "nvme_admin": false, 00:30:11.259 "nvme_io": false, 00:30:11.259 "nvme_io_md": false, 00:30:11.259 "write_zeroes": true, 00:30:11.259 "zcopy": true, 00:30:11.259 "get_zone_info": false, 00:30:11.259 "zone_management": false, 00:30:11.259 "zone_append": false, 00:30:11.259 "compare": false, 00:30:11.259 "compare_and_write": false, 00:30:11.259 "abort": true, 00:30:11.259 "seek_hole": false, 00:30:11.259 "seek_data": false, 00:30:11.259 "copy": true, 00:30:11.259 "nvme_iov_md": false 00:30:11.259 }, 00:30:11.259 "memory_domains": [ 00:30:11.259 { 00:30:11.259 "dma_device_id": "system", 00:30:11.259 "dma_device_type": 1 00:30:11.259 }, 00:30:11.259 { 00:30:11.259 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:11.259 "dma_device_type": 2 00:30:11.259 } 00:30:11.259 ], 00:30:11.259 "driver_specific": {} 00:30:11.259 } 00:30:11.259 ] 00:30:11.259 18:28:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:11.259 18:28:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:30:11.259 18:28:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:30:11.259 18:28:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:30:11.259 18:28:41 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:30:11.259 18:28:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:11.259 18:28:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:11.259 BaseBdev3 00:30:11.259 18:28:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:11.260 18:28:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:30:11.260 18:28:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:30:11.260 18:28:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:30:11.260 18:28:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:30:11.260 18:28:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:30:11.260 18:28:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:30:11.260 18:28:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:30:11.260 18:28:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:11.260 18:28:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:11.260 18:28:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:11.260 18:28:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:30:11.260 18:28:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:11.260 18:28:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:11.260 [ 00:30:11.260 { 
00:30:11.260 "name": "BaseBdev3", 00:30:11.260 "aliases": [ 00:30:11.260 "661dcb92-8179-4397-9159-cbc40c008266" 00:30:11.260 ], 00:30:11.260 "product_name": "Malloc disk", 00:30:11.260 "block_size": 512, 00:30:11.260 "num_blocks": 65536, 00:30:11.260 "uuid": "661dcb92-8179-4397-9159-cbc40c008266", 00:30:11.260 "assigned_rate_limits": { 00:30:11.260 "rw_ios_per_sec": 0, 00:30:11.260 "rw_mbytes_per_sec": 0, 00:30:11.260 "r_mbytes_per_sec": 0, 00:30:11.260 "w_mbytes_per_sec": 0 00:30:11.260 }, 00:30:11.260 "claimed": false, 00:30:11.260 "zoned": false, 00:30:11.260 "supported_io_types": { 00:30:11.260 "read": true, 00:30:11.260 "write": true, 00:30:11.260 "unmap": true, 00:30:11.260 "flush": true, 00:30:11.260 "reset": true, 00:30:11.260 "nvme_admin": false, 00:30:11.260 "nvme_io": false, 00:30:11.260 "nvme_io_md": false, 00:30:11.260 "write_zeroes": true, 00:30:11.260 "zcopy": true, 00:30:11.260 "get_zone_info": false, 00:30:11.260 "zone_management": false, 00:30:11.260 "zone_append": false, 00:30:11.260 "compare": false, 00:30:11.260 "compare_and_write": false, 00:30:11.260 "abort": true, 00:30:11.260 "seek_hole": false, 00:30:11.260 "seek_data": false, 00:30:11.260 "copy": true, 00:30:11.260 "nvme_iov_md": false 00:30:11.260 }, 00:30:11.260 "memory_domains": [ 00:30:11.260 { 00:30:11.260 "dma_device_id": "system", 00:30:11.260 "dma_device_type": 1 00:30:11.260 }, 00:30:11.260 { 00:30:11.260 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:11.260 "dma_device_type": 2 00:30:11.260 } 00:30:11.260 ], 00:30:11.260 "driver_specific": {} 00:30:11.260 } 00:30:11.260 ] 00:30:11.260 18:28:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:11.260 18:28:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:30:11.260 18:28:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:30:11.260 18:28:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i 
< num_base_bdevs )) 00:30:11.260 18:28:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:30:11.260 18:28:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:11.260 18:28:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:11.260 BaseBdev4 00:30:11.260 18:28:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:11.260 18:28:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:30:11.260 18:28:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:30:11.260 18:28:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:30:11.260 18:28:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:30:11.260 18:28:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:30:11.260 18:28:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:30:11.260 18:28:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:30:11.260 18:28:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:11.260 18:28:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:11.260 18:28:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:11.260 18:28:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:30:11.260 18:28:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:11.260 18:28:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- 
# set +x 00:30:11.260 [ 00:30:11.260 { 00:30:11.260 "name": "BaseBdev4", 00:30:11.260 "aliases": [ 00:30:11.260 "f71c3f60-0829-4eb0-be61-ad3d70b28d9d" 00:30:11.260 ], 00:30:11.260 "product_name": "Malloc disk", 00:30:11.260 "block_size": 512, 00:30:11.260 "num_blocks": 65536, 00:30:11.260 "uuid": "f71c3f60-0829-4eb0-be61-ad3d70b28d9d", 00:30:11.260 "assigned_rate_limits": { 00:30:11.260 "rw_ios_per_sec": 0, 00:30:11.260 "rw_mbytes_per_sec": 0, 00:30:11.260 "r_mbytes_per_sec": 0, 00:30:11.260 "w_mbytes_per_sec": 0 00:30:11.260 }, 00:30:11.260 "claimed": false, 00:30:11.260 "zoned": false, 00:30:11.260 "supported_io_types": { 00:30:11.260 "read": true, 00:30:11.260 "write": true, 00:30:11.260 "unmap": true, 00:30:11.260 "flush": true, 00:30:11.260 "reset": true, 00:30:11.260 "nvme_admin": false, 00:30:11.260 "nvme_io": false, 00:30:11.260 "nvme_io_md": false, 00:30:11.260 "write_zeroes": true, 00:30:11.260 "zcopy": true, 00:30:11.260 "get_zone_info": false, 00:30:11.260 "zone_management": false, 00:30:11.260 "zone_append": false, 00:30:11.260 "compare": false, 00:30:11.260 "compare_and_write": false, 00:30:11.260 "abort": true, 00:30:11.260 "seek_hole": false, 00:30:11.260 "seek_data": false, 00:30:11.260 "copy": true, 00:30:11.260 "nvme_iov_md": false 00:30:11.260 }, 00:30:11.260 "memory_domains": [ 00:30:11.260 { 00:30:11.260 "dma_device_id": "system", 00:30:11.260 "dma_device_type": 1 00:30:11.260 }, 00:30:11.260 { 00:30:11.260 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:11.260 "dma_device_type": 2 00:30:11.260 } 00:30:11.260 ], 00:30:11.260 "driver_specific": {} 00:30:11.260 } 00:30:11.260 ] 00:30:11.260 18:28:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:11.260 18:28:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:30:11.260 18:28:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:30:11.260 18:28:42 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:30:11.260 18:28:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:30:11.260 18:28:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:11.260 18:28:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:11.260 [2024-12-06 18:28:42.183513] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:30:11.260 [2024-12-06 18:28:42.183680] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:30:11.260 [2024-12-06 18:28:42.183776] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:30:11.260 [2024-12-06 18:28:42.185867] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:30:11.260 [2024-12-06 18:28:42.186040] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:30:11.260 18:28:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:11.260 18:28:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:30:11.260 18:28:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:30:11.260 18:28:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:30:11.260 18:28:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:30:11.260 18:28:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:30:11.260 18:28:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:30:11.260 18:28:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:30:11.260 18:28:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:30:11.260 18:28:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:30:11.260 18:28:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:30:11.260 18:28:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:11.260 18:28:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:11.260 18:28:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:30:11.260 18:28:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:11.520 18:28:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:11.520 18:28:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:30:11.520 "name": "Existed_Raid", 00:30:11.520 "uuid": "d9ae4d6d-4273-4e8e-9f0d-c924a92108ab", 00:30:11.520 "strip_size_kb": 64, 00:30:11.520 "state": "configuring", 00:30:11.520 "raid_level": "concat", 00:30:11.520 "superblock": true, 00:30:11.520 "num_base_bdevs": 4, 00:30:11.520 "num_base_bdevs_discovered": 3, 00:30:11.520 "num_base_bdevs_operational": 4, 00:30:11.520 "base_bdevs_list": [ 00:30:11.520 { 00:30:11.520 "name": "BaseBdev1", 00:30:11.520 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:11.520 "is_configured": false, 00:30:11.520 "data_offset": 0, 00:30:11.520 "data_size": 0 00:30:11.520 }, 00:30:11.520 { 00:30:11.520 "name": "BaseBdev2", 00:30:11.520 "uuid": "379b02bb-45e0-4c35-93c4-11509ce89b35", 00:30:11.520 "is_configured": true, 00:30:11.520 "data_offset": 2048, 00:30:11.520 "data_size": 63488 
00:30:11.520 }, 00:30:11.520 { 00:30:11.520 "name": "BaseBdev3", 00:30:11.520 "uuid": "661dcb92-8179-4397-9159-cbc40c008266", 00:30:11.520 "is_configured": true, 00:30:11.520 "data_offset": 2048, 00:30:11.520 "data_size": 63488 00:30:11.520 }, 00:30:11.520 { 00:30:11.520 "name": "BaseBdev4", 00:30:11.520 "uuid": "f71c3f60-0829-4eb0-be61-ad3d70b28d9d", 00:30:11.520 "is_configured": true, 00:30:11.520 "data_offset": 2048, 00:30:11.520 "data_size": 63488 00:30:11.520 } 00:30:11.520 ] 00:30:11.520 }' 00:30:11.520 18:28:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:30:11.520 18:28:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:11.779 18:28:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:30:11.779 18:28:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:11.779 18:28:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:11.779 [2024-12-06 18:28:42.606954] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:30:11.779 18:28:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:11.779 18:28:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:30:11.779 18:28:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:30:11.779 18:28:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:30:11.779 18:28:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:30:11.779 18:28:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:30:11.779 18:28:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:30:11.779 18:28:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:30:11.779 18:28:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:30:11.779 18:28:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:30:11.779 18:28:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:30:11.779 18:28:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:11.779 18:28:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:30:11.779 18:28:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:11.779 18:28:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:11.779 18:28:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:11.779 18:28:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:30:11.779 "name": "Existed_Raid", 00:30:11.779 "uuid": "d9ae4d6d-4273-4e8e-9f0d-c924a92108ab", 00:30:11.779 "strip_size_kb": 64, 00:30:11.779 "state": "configuring", 00:30:11.779 "raid_level": "concat", 00:30:11.779 "superblock": true, 00:30:11.779 "num_base_bdevs": 4, 00:30:11.779 "num_base_bdevs_discovered": 2, 00:30:11.779 "num_base_bdevs_operational": 4, 00:30:11.779 "base_bdevs_list": [ 00:30:11.779 { 00:30:11.779 "name": "BaseBdev1", 00:30:11.779 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:11.779 "is_configured": false, 00:30:11.779 "data_offset": 0, 00:30:11.779 "data_size": 0 00:30:11.779 }, 00:30:11.779 { 00:30:11.779 "name": null, 00:30:11.779 "uuid": "379b02bb-45e0-4c35-93c4-11509ce89b35", 00:30:11.779 "is_configured": false, 00:30:11.779 "data_offset": 0, 00:30:11.779 "data_size": 63488 
00:30:11.779 }, 00:30:11.779 { 00:30:11.779 "name": "BaseBdev3", 00:30:11.779 "uuid": "661dcb92-8179-4397-9159-cbc40c008266", 00:30:11.779 "is_configured": true, 00:30:11.779 "data_offset": 2048, 00:30:11.779 "data_size": 63488 00:30:11.779 }, 00:30:11.779 { 00:30:11.779 "name": "BaseBdev4", 00:30:11.779 "uuid": "f71c3f60-0829-4eb0-be61-ad3d70b28d9d", 00:30:11.779 "is_configured": true, 00:30:11.779 "data_offset": 2048, 00:30:11.779 "data_size": 63488 00:30:11.779 } 00:30:11.779 ] 00:30:11.779 }' 00:30:11.779 18:28:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:30:11.779 18:28:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:12.348 18:28:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:12.348 18:28:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:12.348 18:28:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:12.348 18:28:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:30:12.348 18:28:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:12.348 18:28:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:30:12.348 18:28:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:30:12.349 18:28:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:12.349 18:28:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:12.349 [2024-12-06 18:28:43.141417] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:30:12.349 BaseBdev1 00:30:12.349 18:28:43 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:12.349 18:28:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:30:12.349 18:28:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:30:12.349 18:28:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:30:12.349 18:28:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:30:12.349 18:28:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:30:12.349 18:28:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:30:12.349 18:28:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:30:12.349 18:28:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:12.349 18:28:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:12.349 18:28:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:12.349 18:28:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:30:12.349 18:28:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:12.349 18:28:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:12.349 [ 00:30:12.349 { 00:30:12.349 "name": "BaseBdev1", 00:30:12.349 "aliases": [ 00:30:12.349 "91a853e4-419c-489c-9605-212fdd45042b" 00:30:12.349 ], 00:30:12.349 "product_name": "Malloc disk", 00:30:12.349 "block_size": 512, 00:30:12.349 "num_blocks": 65536, 00:30:12.349 "uuid": "91a853e4-419c-489c-9605-212fdd45042b", 00:30:12.349 "assigned_rate_limits": { 00:30:12.349 "rw_ios_per_sec": 0, 00:30:12.349 "rw_mbytes_per_sec": 0, 
00:30:12.349 "r_mbytes_per_sec": 0, 00:30:12.349 "w_mbytes_per_sec": 0 00:30:12.349 }, 00:30:12.349 "claimed": true, 00:30:12.349 "claim_type": "exclusive_write", 00:30:12.349 "zoned": false, 00:30:12.349 "supported_io_types": { 00:30:12.349 "read": true, 00:30:12.349 "write": true, 00:30:12.349 "unmap": true, 00:30:12.349 "flush": true, 00:30:12.349 "reset": true, 00:30:12.349 "nvme_admin": false, 00:30:12.349 "nvme_io": false, 00:30:12.349 "nvme_io_md": false, 00:30:12.349 "write_zeroes": true, 00:30:12.349 "zcopy": true, 00:30:12.349 "get_zone_info": false, 00:30:12.349 "zone_management": false, 00:30:12.349 "zone_append": false, 00:30:12.349 "compare": false, 00:30:12.349 "compare_and_write": false, 00:30:12.349 "abort": true, 00:30:12.349 "seek_hole": false, 00:30:12.349 "seek_data": false, 00:30:12.349 "copy": true, 00:30:12.349 "nvme_iov_md": false 00:30:12.349 }, 00:30:12.349 "memory_domains": [ 00:30:12.349 { 00:30:12.349 "dma_device_id": "system", 00:30:12.349 "dma_device_type": 1 00:30:12.349 }, 00:30:12.349 { 00:30:12.349 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:12.349 "dma_device_type": 2 00:30:12.349 } 00:30:12.349 ], 00:30:12.349 "driver_specific": {} 00:30:12.349 } 00:30:12.349 ] 00:30:12.349 18:28:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:12.349 18:28:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:30:12.349 18:28:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:30:12.349 18:28:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:30:12.349 18:28:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:30:12.349 18:28:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:30:12.349 18:28:43 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:30:12.349 18:28:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:30:12.349 18:28:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:30:12.349 18:28:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:30:12.349 18:28:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:30:12.349 18:28:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:30:12.349 18:28:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:12.349 18:28:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:30:12.349 18:28:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:12.349 18:28:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:12.349 18:28:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:12.349 18:28:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:30:12.349 "name": "Existed_Raid", 00:30:12.349 "uuid": "d9ae4d6d-4273-4e8e-9f0d-c924a92108ab", 00:30:12.349 "strip_size_kb": 64, 00:30:12.349 "state": "configuring", 00:30:12.349 "raid_level": "concat", 00:30:12.349 "superblock": true, 00:30:12.349 "num_base_bdevs": 4, 00:30:12.349 "num_base_bdevs_discovered": 3, 00:30:12.349 "num_base_bdevs_operational": 4, 00:30:12.349 "base_bdevs_list": [ 00:30:12.349 { 00:30:12.349 "name": "BaseBdev1", 00:30:12.349 "uuid": "91a853e4-419c-489c-9605-212fdd45042b", 00:30:12.349 "is_configured": true, 00:30:12.349 "data_offset": 2048, 00:30:12.349 "data_size": 63488 00:30:12.349 }, 00:30:12.349 { 
00:30:12.349 "name": null, 00:30:12.349 "uuid": "379b02bb-45e0-4c35-93c4-11509ce89b35", 00:30:12.349 "is_configured": false, 00:30:12.349 "data_offset": 0, 00:30:12.349 "data_size": 63488 00:30:12.349 }, 00:30:12.349 { 00:30:12.349 "name": "BaseBdev3", 00:30:12.349 "uuid": "661dcb92-8179-4397-9159-cbc40c008266", 00:30:12.349 "is_configured": true, 00:30:12.349 "data_offset": 2048, 00:30:12.349 "data_size": 63488 00:30:12.349 }, 00:30:12.349 { 00:30:12.349 "name": "BaseBdev4", 00:30:12.349 "uuid": "f71c3f60-0829-4eb0-be61-ad3d70b28d9d", 00:30:12.349 "is_configured": true, 00:30:12.349 "data_offset": 2048, 00:30:12.349 "data_size": 63488 00:30:12.349 } 00:30:12.349 ] 00:30:12.349 }' 00:30:12.349 18:28:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:30:12.349 18:28:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:12.919 18:28:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:30:12.919 18:28:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:12.919 18:28:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:12.919 18:28:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:12.919 18:28:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:12.919 18:28:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:30:12.919 18:28:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:30:12.919 18:28:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:12.919 18:28:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:12.919 [2024-12-06 18:28:43.632936] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:30:12.919 18:28:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:12.919 18:28:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:30:12.919 18:28:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:30:12.919 18:28:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:30:12.919 18:28:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:30:12.919 18:28:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:30:12.919 18:28:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:30:12.919 18:28:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:30:12.919 18:28:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:30:12.919 18:28:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:30:12.919 18:28:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:30:12.919 18:28:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:12.919 18:28:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:30:12.919 18:28:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:12.919 18:28:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:12.919 18:28:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:12.919 18:28:43 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:30:12.919 "name": "Existed_Raid", 00:30:12.919 "uuid": "d9ae4d6d-4273-4e8e-9f0d-c924a92108ab", 00:30:12.919 "strip_size_kb": 64, 00:30:12.919 "state": "configuring", 00:30:12.919 "raid_level": "concat", 00:30:12.919 "superblock": true, 00:30:12.919 "num_base_bdevs": 4, 00:30:12.919 "num_base_bdevs_discovered": 2, 00:30:12.919 "num_base_bdevs_operational": 4, 00:30:12.919 "base_bdevs_list": [ 00:30:12.919 { 00:30:12.919 "name": "BaseBdev1", 00:30:12.919 "uuid": "91a853e4-419c-489c-9605-212fdd45042b", 00:30:12.919 "is_configured": true, 00:30:12.919 "data_offset": 2048, 00:30:12.919 "data_size": 63488 00:30:12.919 }, 00:30:12.919 { 00:30:12.919 "name": null, 00:30:12.919 "uuid": "379b02bb-45e0-4c35-93c4-11509ce89b35", 00:30:12.919 "is_configured": false, 00:30:12.919 "data_offset": 0, 00:30:12.919 "data_size": 63488 00:30:12.919 }, 00:30:12.919 { 00:30:12.919 "name": null, 00:30:12.919 "uuid": "661dcb92-8179-4397-9159-cbc40c008266", 00:30:12.919 "is_configured": false, 00:30:12.919 "data_offset": 0, 00:30:12.919 "data_size": 63488 00:30:12.919 }, 00:30:12.919 { 00:30:12.919 "name": "BaseBdev4", 00:30:12.919 "uuid": "f71c3f60-0829-4eb0-be61-ad3d70b28d9d", 00:30:12.919 "is_configured": true, 00:30:12.919 "data_offset": 2048, 00:30:12.919 "data_size": 63488 00:30:12.919 } 00:30:12.919 ] 00:30:12.919 }' 00:30:12.919 18:28:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:30:12.919 18:28:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:13.179 18:28:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:13.179 18:28:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:13.179 18:28:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:30:13.179 
18:28:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:13.179 18:28:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:13.179 18:28:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:30:13.179 18:28:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:30:13.179 18:28:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:13.179 18:28:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:13.179 [2024-12-06 18:28:44.092284] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:30:13.179 18:28:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:13.179 18:28:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:30:13.179 18:28:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:30:13.179 18:28:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:30:13.179 18:28:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:30:13.179 18:28:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:30:13.179 18:28:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:30:13.179 18:28:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:30:13.179 18:28:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:30:13.179 18:28:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:30:13.179 18:28:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:30:13.179 18:28:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:13.179 18:28:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:30:13.179 18:28:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:13.179 18:28:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:13.439 18:28:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:13.439 18:28:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:30:13.439 "name": "Existed_Raid", 00:30:13.439 "uuid": "d9ae4d6d-4273-4e8e-9f0d-c924a92108ab", 00:30:13.439 "strip_size_kb": 64, 00:30:13.439 "state": "configuring", 00:30:13.439 "raid_level": "concat", 00:30:13.439 "superblock": true, 00:30:13.439 "num_base_bdevs": 4, 00:30:13.439 "num_base_bdevs_discovered": 3, 00:30:13.439 "num_base_bdevs_operational": 4, 00:30:13.439 "base_bdevs_list": [ 00:30:13.439 { 00:30:13.439 "name": "BaseBdev1", 00:30:13.439 "uuid": "91a853e4-419c-489c-9605-212fdd45042b", 00:30:13.439 "is_configured": true, 00:30:13.439 "data_offset": 2048, 00:30:13.439 "data_size": 63488 00:30:13.439 }, 00:30:13.439 { 00:30:13.439 "name": null, 00:30:13.439 "uuid": "379b02bb-45e0-4c35-93c4-11509ce89b35", 00:30:13.439 "is_configured": false, 00:30:13.439 "data_offset": 0, 00:30:13.439 "data_size": 63488 00:30:13.439 }, 00:30:13.439 { 00:30:13.439 "name": "BaseBdev3", 00:30:13.439 "uuid": "661dcb92-8179-4397-9159-cbc40c008266", 00:30:13.439 "is_configured": true, 00:30:13.439 "data_offset": 2048, 00:30:13.439 "data_size": 63488 00:30:13.439 }, 00:30:13.439 { 00:30:13.439 "name": "BaseBdev4", 00:30:13.439 "uuid": 
"f71c3f60-0829-4eb0-be61-ad3d70b28d9d", 00:30:13.439 "is_configured": true, 00:30:13.439 "data_offset": 2048, 00:30:13.439 "data_size": 63488 00:30:13.439 } 00:30:13.439 ] 00:30:13.439 }' 00:30:13.439 18:28:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:30:13.439 18:28:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:13.698 18:28:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:30:13.699 18:28:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:13.699 18:28:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:13.699 18:28:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:13.699 18:28:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:13.699 18:28:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:30:13.699 18:28:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:30:13.699 18:28:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:13.699 18:28:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:13.699 [2024-12-06 18:28:44.563673] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:30:13.958 18:28:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:13.958 18:28:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:30:13.958 18:28:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:30:13.958 18:28:44 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:30:13.958 18:28:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:30:13.958 18:28:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:30:13.958 18:28:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:30:13.958 18:28:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:30:13.958 18:28:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:30:13.958 18:28:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:30:13.958 18:28:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:30:13.958 18:28:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:13.958 18:28:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:30:13.958 18:28:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:13.958 18:28:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:13.958 18:28:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:13.958 18:28:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:30:13.958 "name": "Existed_Raid", 00:30:13.958 "uuid": "d9ae4d6d-4273-4e8e-9f0d-c924a92108ab", 00:30:13.958 "strip_size_kb": 64, 00:30:13.958 "state": "configuring", 00:30:13.958 "raid_level": "concat", 00:30:13.958 "superblock": true, 00:30:13.958 "num_base_bdevs": 4, 00:30:13.958 "num_base_bdevs_discovered": 2, 00:30:13.958 "num_base_bdevs_operational": 4, 00:30:13.958 "base_bdevs_list": [ 00:30:13.958 { 00:30:13.958 "name": null, 00:30:13.958 
"uuid": "91a853e4-419c-489c-9605-212fdd45042b", 00:30:13.958 "is_configured": false, 00:30:13.958 "data_offset": 0, 00:30:13.958 "data_size": 63488 00:30:13.958 }, 00:30:13.958 { 00:30:13.958 "name": null, 00:30:13.958 "uuid": "379b02bb-45e0-4c35-93c4-11509ce89b35", 00:30:13.958 "is_configured": false, 00:30:13.958 "data_offset": 0, 00:30:13.958 "data_size": 63488 00:30:13.958 }, 00:30:13.958 { 00:30:13.958 "name": "BaseBdev3", 00:30:13.958 "uuid": "661dcb92-8179-4397-9159-cbc40c008266", 00:30:13.958 "is_configured": true, 00:30:13.958 "data_offset": 2048, 00:30:13.958 "data_size": 63488 00:30:13.958 }, 00:30:13.958 { 00:30:13.958 "name": "BaseBdev4", 00:30:13.958 "uuid": "f71c3f60-0829-4eb0-be61-ad3d70b28d9d", 00:30:13.958 "is_configured": true, 00:30:13.958 "data_offset": 2048, 00:30:13.958 "data_size": 63488 00:30:13.958 } 00:30:13.958 ] 00:30:13.959 }' 00:30:13.959 18:28:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:30:13.959 18:28:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:14.218 18:28:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:30:14.218 18:28:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:14.218 18:28:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:14.218 18:28:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:14.218 18:28:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:14.218 18:28:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:30:14.218 18:28:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:30:14.218 18:28:45 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:30:14.218 18:28:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:14.218 [2024-12-06 18:28:45.111514] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:30:14.218 18:28:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:14.218 18:28:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:30:14.218 18:28:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:30:14.218 18:28:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:30:14.218 18:28:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:30:14.218 18:28:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:30:14.218 18:28:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:30:14.218 18:28:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:30:14.218 18:28:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:30:14.218 18:28:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:30:14.218 18:28:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:30:14.218 18:28:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:14.218 18:28:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:14.218 18:28:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:30:14.218 18:28:45 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:14.218 18:28:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:14.477 18:28:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:30:14.477 "name": "Existed_Raid", 00:30:14.477 "uuid": "d9ae4d6d-4273-4e8e-9f0d-c924a92108ab", 00:30:14.477 "strip_size_kb": 64, 00:30:14.477 "state": "configuring", 00:30:14.477 "raid_level": "concat", 00:30:14.477 "superblock": true, 00:30:14.477 "num_base_bdevs": 4, 00:30:14.477 "num_base_bdevs_discovered": 3, 00:30:14.477 "num_base_bdevs_operational": 4, 00:30:14.477 "base_bdevs_list": [ 00:30:14.477 { 00:30:14.477 "name": null, 00:30:14.477 "uuid": "91a853e4-419c-489c-9605-212fdd45042b", 00:30:14.477 "is_configured": false, 00:30:14.477 "data_offset": 0, 00:30:14.477 "data_size": 63488 00:30:14.477 }, 00:30:14.477 { 00:30:14.477 "name": "BaseBdev2", 00:30:14.477 "uuid": "379b02bb-45e0-4c35-93c4-11509ce89b35", 00:30:14.477 "is_configured": true, 00:30:14.477 "data_offset": 2048, 00:30:14.477 "data_size": 63488 00:30:14.477 }, 00:30:14.477 { 00:30:14.477 "name": "BaseBdev3", 00:30:14.477 "uuid": "661dcb92-8179-4397-9159-cbc40c008266", 00:30:14.477 "is_configured": true, 00:30:14.477 "data_offset": 2048, 00:30:14.477 "data_size": 63488 00:30:14.477 }, 00:30:14.477 { 00:30:14.477 "name": "BaseBdev4", 00:30:14.477 "uuid": "f71c3f60-0829-4eb0-be61-ad3d70b28d9d", 00:30:14.477 "is_configured": true, 00:30:14.477 "data_offset": 2048, 00:30:14.477 "data_size": 63488 00:30:14.477 } 00:30:14.477 ] 00:30:14.477 }' 00:30:14.477 18:28:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:30:14.477 18:28:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:14.735 18:28:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:14.735 18:28:45 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:30:14.735 18:28:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:14.735 18:28:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:14.735 18:28:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:14.735 18:28:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:30:14.735 18:28:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:14.735 18:28:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:30:14.735 18:28:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:14.735 18:28:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:14.735 18:28:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:14.735 18:28:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 91a853e4-419c-489c-9605-212fdd45042b 00:30:14.735 18:28:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:14.735 18:28:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:14.735 [2024-12-06 18:28:45.676695] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:30:14.735 [2024-12-06 18:28:45.676922] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:30:14.735 [2024-12-06 18:28:45.676936] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:30:14.735 [2024-12-06 18:28:45.677230] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d0000063c0 00:30:14.735 [2024-12-06 18:28:45.677374] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:30:14.735 [2024-12-06 18:28:45.677388] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:30:14.735 NewBaseBdev 00:30:14.735 [2024-12-06 18:28:45.677523] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:30:14.735 18:28:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:14.735 18:28:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:30:14.735 18:28:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:30:14.735 18:28:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:30:14.735 18:28:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:30:14.735 18:28:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:30:14.735 18:28:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:30:14.735 18:28:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:30:14.735 18:28:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:14.735 18:28:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:14.993 18:28:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:14.993 18:28:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:30:14.993 18:28:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:14.993 18:28:45 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:14.993 [ 00:30:14.993 { 00:30:14.993 "name": "NewBaseBdev", 00:30:14.993 "aliases": [ 00:30:14.993 "91a853e4-419c-489c-9605-212fdd45042b" 00:30:14.993 ], 00:30:14.993 "product_name": "Malloc disk", 00:30:14.993 "block_size": 512, 00:30:14.993 "num_blocks": 65536, 00:30:14.993 "uuid": "91a853e4-419c-489c-9605-212fdd45042b", 00:30:14.993 "assigned_rate_limits": { 00:30:14.993 "rw_ios_per_sec": 0, 00:30:14.993 "rw_mbytes_per_sec": 0, 00:30:14.993 "r_mbytes_per_sec": 0, 00:30:14.993 "w_mbytes_per_sec": 0 00:30:14.993 }, 00:30:14.993 "claimed": true, 00:30:14.993 "claim_type": "exclusive_write", 00:30:14.993 "zoned": false, 00:30:14.993 "supported_io_types": { 00:30:14.993 "read": true, 00:30:14.993 "write": true, 00:30:14.993 "unmap": true, 00:30:14.993 "flush": true, 00:30:14.993 "reset": true, 00:30:14.993 "nvme_admin": false, 00:30:14.993 "nvme_io": false, 00:30:14.993 "nvme_io_md": false, 00:30:14.993 "write_zeroes": true, 00:30:14.993 "zcopy": true, 00:30:14.993 "get_zone_info": false, 00:30:14.993 "zone_management": false, 00:30:14.993 "zone_append": false, 00:30:14.993 "compare": false, 00:30:14.993 "compare_and_write": false, 00:30:14.993 "abort": true, 00:30:14.993 "seek_hole": false, 00:30:14.993 "seek_data": false, 00:30:14.993 "copy": true, 00:30:14.993 "nvme_iov_md": false 00:30:14.993 }, 00:30:14.993 "memory_domains": [ 00:30:14.993 { 00:30:14.993 "dma_device_id": "system", 00:30:14.993 "dma_device_type": 1 00:30:14.993 }, 00:30:14.993 { 00:30:14.993 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:14.993 "dma_device_type": 2 00:30:14.993 } 00:30:14.993 ], 00:30:14.993 "driver_specific": {} 00:30:14.993 } 00:30:14.993 ] 00:30:14.993 18:28:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:14.993 18:28:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:30:14.993 18:28:45 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:30:14.993 18:28:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:30:14.993 18:28:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:30:14.993 18:28:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:30:14.993 18:28:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:30:14.993 18:28:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:30:14.993 18:28:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:30:14.993 18:28:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:30:14.993 18:28:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:30:14.993 18:28:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:30:14.993 18:28:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:14.993 18:28:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:14.993 18:28:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:30:14.993 18:28:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:14.993 18:28:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:14.993 18:28:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:30:14.993 "name": "Existed_Raid", 00:30:14.993 "uuid": "d9ae4d6d-4273-4e8e-9f0d-c924a92108ab", 00:30:14.993 "strip_size_kb": 64, 00:30:14.993 
"state": "online", 00:30:14.993 "raid_level": "concat", 00:30:14.993 "superblock": true, 00:30:14.993 "num_base_bdevs": 4, 00:30:14.993 "num_base_bdevs_discovered": 4, 00:30:14.993 "num_base_bdevs_operational": 4, 00:30:14.993 "base_bdevs_list": [ 00:30:14.993 { 00:30:14.993 "name": "NewBaseBdev", 00:30:14.993 "uuid": "91a853e4-419c-489c-9605-212fdd45042b", 00:30:14.993 "is_configured": true, 00:30:14.993 "data_offset": 2048, 00:30:14.993 "data_size": 63488 00:30:14.993 }, 00:30:14.993 { 00:30:14.993 "name": "BaseBdev2", 00:30:14.993 "uuid": "379b02bb-45e0-4c35-93c4-11509ce89b35", 00:30:14.993 "is_configured": true, 00:30:14.993 "data_offset": 2048, 00:30:14.993 "data_size": 63488 00:30:14.993 }, 00:30:14.993 { 00:30:14.993 "name": "BaseBdev3", 00:30:14.993 "uuid": "661dcb92-8179-4397-9159-cbc40c008266", 00:30:14.993 "is_configured": true, 00:30:14.993 "data_offset": 2048, 00:30:14.993 "data_size": 63488 00:30:14.993 }, 00:30:14.993 { 00:30:14.993 "name": "BaseBdev4", 00:30:14.993 "uuid": "f71c3f60-0829-4eb0-be61-ad3d70b28d9d", 00:30:14.993 "is_configured": true, 00:30:14.993 "data_offset": 2048, 00:30:14.993 "data_size": 63488 00:30:14.993 } 00:30:14.993 ] 00:30:14.993 }' 00:30:14.993 18:28:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:30:14.993 18:28:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:15.252 18:28:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:30:15.252 18:28:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:30:15.252 18:28:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:30:15.252 18:28:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:30:15.252 18:28:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:30:15.252 
18:28:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:30:15.252 18:28:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:30:15.252 18:28:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:15.252 18:28:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:30:15.252 18:28:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:15.252 [2024-12-06 18:28:46.136543] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:30:15.252 18:28:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:15.252 18:28:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:30:15.252 "name": "Existed_Raid", 00:30:15.252 "aliases": [ 00:30:15.252 "d9ae4d6d-4273-4e8e-9f0d-c924a92108ab" 00:30:15.252 ], 00:30:15.252 "product_name": "Raid Volume", 00:30:15.252 "block_size": 512, 00:30:15.252 "num_blocks": 253952, 00:30:15.252 "uuid": "d9ae4d6d-4273-4e8e-9f0d-c924a92108ab", 00:30:15.252 "assigned_rate_limits": { 00:30:15.252 "rw_ios_per_sec": 0, 00:30:15.252 "rw_mbytes_per_sec": 0, 00:30:15.252 "r_mbytes_per_sec": 0, 00:30:15.252 "w_mbytes_per_sec": 0 00:30:15.252 }, 00:30:15.252 "claimed": false, 00:30:15.252 "zoned": false, 00:30:15.252 "supported_io_types": { 00:30:15.252 "read": true, 00:30:15.252 "write": true, 00:30:15.252 "unmap": true, 00:30:15.252 "flush": true, 00:30:15.252 "reset": true, 00:30:15.252 "nvme_admin": false, 00:30:15.252 "nvme_io": false, 00:30:15.252 "nvme_io_md": false, 00:30:15.252 "write_zeroes": true, 00:30:15.252 "zcopy": false, 00:30:15.252 "get_zone_info": false, 00:30:15.252 "zone_management": false, 00:30:15.252 "zone_append": false, 00:30:15.252 "compare": false, 00:30:15.252 "compare_and_write": false, 00:30:15.252 "abort": 
false, 00:30:15.252 "seek_hole": false, 00:30:15.252 "seek_data": false, 00:30:15.252 "copy": false, 00:30:15.252 "nvme_iov_md": false 00:30:15.252 }, 00:30:15.252 "memory_domains": [ 00:30:15.252 { 00:30:15.252 "dma_device_id": "system", 00:30:15.252 "dma_device_type": 1 00:30:15.252 }, 00:30:15.252 { 00:30:15.252 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:15.252 "dma_device_type": 2 00:30:15.252 }, 00:30:15.252 { 00:30:15.252 "dma_device_id": "system", 00:30:15.252 "dma_device_type": 1 00:30:15.252 }, 00:30:15.252 { 00:30:15.252 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:15.252 "dma_device_type": 2 00:30:15.252 }, 00:30:15.252 { 00:30:15.252 "dma_device_id": "system", 00:30:15.252 "dma_device_type": 1 00:30:15.252 }, 00:30:15.252 { 00:30:15.252 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:15.252 "dma_device_type": 2 00:30:15.252 }, 00:30:15.252 { 00:30:15.252 "dma_device_id": "system", 00:30:15.252 "dma_device_type": 1 00:30:15.252 }, 00:30:15.252 { 00:30:15.252 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:15.252 "dma_device_type": 2 00:30:15.252 } 00:30:15.252 ], 00:30:15.252 "driver_specific": { 00:30:15.252 "raid": { 00:30:15.252 "uuid": "d9ae4d6d-4273-4e8e-9f0d-c924a92108ab", 00:30:15.252 "strip_size_kb": 64, 00:30:15.252 "state": "online", 00:30:15.252 "raid_level": "concat", 00:30:15.252 "superblock": true, 00:30:15.252 "num_base_bdevs": 4, 00:30:15.252 "num_base_bdevs_discovered": 4, 00:30:15.252 "num_base_bdevs_operational": 4, 00:30:15.252 "base_bdevs_list": [ 00:30:15.252 { 00:30:15.252 "name": "NewBaseBdev", 00:30:15.252 "uuid": "91a853e4-419c-489c-9605-212fdd45042b", 00:30:15.252 "is_configured": true, 00:30:15.252 "data_offset": 2048, 00:30:15.252 "data_size": 63488 00:30:15.252 }, 00:30:15.252 { 00:30:15.252 "name": "BaseBdev2", 00:30:15.252 "uuid": "379b02bb-45e0-4c35-93c4-11509ce89b35", 00:30:15.252 "is_configured": true, 00:30:15.252 "data_offset": 2048, 00:30:15.252 "data_size": 63488 00:30:15.252 }, 00:30:15.252 { 00:30:15.252 
"name": "BaseBdev3", 00:30:15.252 "uuid": "661dcb92-8179-4397-9159-cbc40c008266", 00:30:15.252 "is_configured": true, 00:30:15.252 "data_offset": 2048, 00:30:15.252 "data_size": 63488 00:30:15.252 }, 00:30:15.252 { 00:30:15.252 "name": "BaseBdev4", 00:30:15.252 "uuid": "f71c3f60-0829-4eb0-be61-ad3d70b28d9d", 00:30:15.252 "is_configured": true, 00:30:15.252 "data_offset": 2048, 00:30:15.252 "data_size": 63488 00:30:15.252 } 00:30:15.252 ] 00:30:15.252 } 00:30:15.252 } 00:30:15.252 }' 00:30:15.252 18:28:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:30:15.510 18:28:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:30:15.510 BaseBdev2 00:30:15.510 BaseBdev3 00:30:15.510 BaseBdev4' 00:30:15.510 18:28:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:30:15.510 18:28:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:30:15.510 18:28:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:30:15.510 18:28:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:30:15.510 18:28:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:30:15.510 18:28:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:15.510 18:28:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:15.510 18:28:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:15.510 18:28:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:30:15.510 18:28:46 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:30:15.510 18:28:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:30:15.510 18:28:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:30:15.510 18:28:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:30:15.510 18:28:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:15.510 18:28:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:15.510 18:28:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:15.510 18:28:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:30:15.510 18:28:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:30:15.510 18:28:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:30:15.510 18:28:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:30:15.510 18:28:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:30:15.510 18:28:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:15.510 18:28:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:15.510 18:28:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:15.510 18:28:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:30:15.510 18:28:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # 
[[ 512 == \5\1\2\ \ \ ]] 00:30:15.510 18:28:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:30:15.510 18:28:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:30:15.510 18:28:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:30:15.510 18:28:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:15.510 18:28:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:15.510 18:28:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:15.510 18:28:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:30:15.510 18:28:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:30:15.510 18:28:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:30:15.510 18:28:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:15.510 18:28:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:15.510 [2024-12-06 18:28:46.447735] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:30:15.510 [2024-12-06 18:28:46.447767] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:30:15.510 [2024-12-06 18:28:46.447833] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:30:15.510 [2024-12-06 18:28:46.447902] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:30:15.510 [2024-12-06 18:28:46.447914] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, 
state offline 00:30:15.510 18:28:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:15.510 18:28:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 71679 00:30:15.510 18:28:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 71679 ']' 00:30:15.510 18:28:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 71679 00:30:15.510 18:28:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:30:15.768 18:28:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:15.768 18:28:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71679 00:30:15.768 killing process with pid 71679 00:30:15.768 18:28:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:30:15.768 18:28:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:30:15.768 18:28:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71679' 00:30:15.768 18:28:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 71679 00:30:15.768 [2024-12-06 18:28:46.494943] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:30:15.768 18:28:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 71679 00:30:16.026 [2024-12-06 18:28:46.904986] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:30:17.405 18:28:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:30:17.405 00:30:17.405 real 0m11.496s 00:30:17.405 user 0m18.152s 00:30:17.405 sys 0m2.396s 00:30:17.405 18:28:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:17.405 18:28:48 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:17.405 ************************************ 00:30:17.405 END TEST raid_state_function_test_sb 00:30:17.405 ************************************ 00:30:17.405 18:28:48 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test concat 4 00:30:17.405 18:28:48 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:30:17.405 18:28:48 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:17.405 18:28:48 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:30:17.405 ************************************ 00:30:17.405 START TEST raid_superblock_test 00:30:17.405 ************************************ 00:30:17.405 18:28:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test concat 4 00:30:17.405 18:28:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat 00:30:17.405 18:28:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:30:17.405 18:28:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:30:17.405 18:28:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:30:17.405 18:28:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:30:17.405 18:28:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:30:17.405 18:28:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:30:17.405 18:28:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:30:17.405 18:28:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:30:17.405 18:28:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:30:17.405 18:28:48 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:30:17.405 18:28:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:30:17.405 18:28:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:30:17.405 18:28:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']' 00:30:17.405 18:28:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:30:17.405 18:28:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:30:17.405 18:28:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=72348 00:30:17.405 18:28:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:30:17.405 18:28:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 72348 00:30:17.405 18:28:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 72348 ']' 00:30:17.405 18:28:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:17.405 18:28:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:17.405 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:17.405 18:28:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:17.405 18:28:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:17.405 18:28:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:30:17.405 [2024-12-06 18:28:48.243706] Starting SPDK v25.01-pre git sha1 b6a18b192 / DPDK 24.03.0 initialization... 
00:30:17.405 [2024-12-06 18:28:48.243998] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72348 ] 00:30:17.665 [2024-12-06 18:28:48.425387] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:17.665 [2024-12-06 18:28:48.538618] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:17.924 [2024-12-06 18:28:48.746243] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:30:17.924 [2024-12-06 18:28:48.746280] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:30:18.182 18:28:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:18.182 18:28:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:30:18.182 18:28:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:30:18.182 18:28:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:30:18.182 18:28:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:30:18.182 18:28:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:30:18.182 18:28:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:30:18.182 18:28:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:30:18.182 18:28:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:30:18.182 18:28:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:30:18.182 18:28:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:30:18.182 
18:28:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:18.182 18:28:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:30:18.442 malloc1 00:30:18.442 18:28:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:18.442 18:28:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:30:18.442 18:28:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:18.442 18:28:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:30:18.442 [2024-12-06 18:28:49.169198] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:30:18.442 [2024-12-06 18:28:49.169402] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:30:18.442 [2024-12-06 18:28:49.169467] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:30:18.442 [2024-12-06 18:28:49.169564] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:30:18.442 [2024-12-06 18:28:49.172289] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:30:18.442 [2024-12-06 18:28:49.172453] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:30:18.442 pt1 00:30:18.442 18:28:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:18.442 18:28:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:30:18.442 18:28:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:30:18.442 18:28:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:30:18.442 18:28:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:30:18.442 18:28:49 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:30:18.442 18:28:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:30:18.442 18:28:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:30:18.442 18:28:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:30:18.442 18:28:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:30:18.442 18:28:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:18.442 18:28:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:30:18.442 malloc2 00:30:18.442 18:28:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:18.442 18:28:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:30:18.442 18:28:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:18.442 18:28:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:30:18.442 [2024-12-06 18:28:49.229602] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:30:18.442 [2024-12-06 18:28:49.229807] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:30:18.442 [2024-12-06 18:28:49.229848] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:30:18.442 [2024-12-06 18:28:49.229861] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:30:18.443 [2024-12-06 18:28:49.232583] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:30:18.443 [2024-12-06 18:28:49.232622] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:30:18.443 
pt2 00:30:18.443 18:28:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:18.443 18:28:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:30:18.443 18:28:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:30:18.443 18:28:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:30:18.443 18:28:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:30:18.443 18:28:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:30:18.443 18:28:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:30:18.443 18:28:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:30:18.443 18:28:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:30:18.443 18:28:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:30:18.443 18:28:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:18.443 18:28:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:30:18.443 malloc3 00:30:18.443 18:28:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:18.443 18:28:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:30:18.443 18:28:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:18.443 18:28:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:30:18.443 [2024-12-06 18:28:49.293657] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:30:18.443 [2024-12-06 18:28:49.293869] 
vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:30:18.443 [2024-12-06 18:28:49.293937] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:30:18.443 [2024-12-06 18:28:49.294020] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:30:18.443 [2024-12-06 18:28:49.296937] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:30:18.443 [2024-12-06 18:28:49.297082] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:30:18.443 pt3 00:30:18.443 18:28:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:18.443 18:28:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:30:18.443 18:28:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:30:18.443 18:28:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:30:18.443 18:28:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:30:18.443 18:28:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:30:18.443 18:28:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:30:18.443 18:28:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:30:18.443 18:28:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:30:18.443 18:28:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:30:18.443 18:28:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:18.443 18:28:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:30:18.443 malloc4 00:30:18.443 18:28:49 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:18.443 18:28:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:30:18.443 18:28:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:18.443 18:28:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:30:18.443 [2024-12-06 18:28:49.349311] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:30:18.443 [2024-12-06 18:28:49.349475] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:30:18.443 [2024-12-06 18:28:49.349531] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:30:18.443 [2024-12-06 18:28:49.349651] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:30:18.443 [2024-12-06 18:28:49.352110] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:30:18.443 [2024-12-06 18:28:49.352288] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:30:18.443 pt4 00:30:18.443 18:28:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:18.443 18:28:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:30:18.443 18:28:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:30:18.443 18:28:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:30:18.443 18:28:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:18.443 18:28:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:30:18.443 [2024-12-06 18:28:49.361337] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:30:18.443 [2024-12-06 
18:28:49.363417] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:30:18.443 [2024-12-06 18:28:49.363505] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:30:18.443 [2024-12-06 18:28:49.363548] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:30:18.443 [2024-12-06 18:28:49.363722] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:30:18.443 [2024-12-06 18:28:49.363735] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:30:18.443 [2024-12-06 18:28:49.363993] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:30:18.443 [2024-12-06 18:28:49.364165] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:30:18.443 [2024-12-06 18:28:49.364180] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:30:18.443 [2024-12-06 18:28:49.364313] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:30:18.443 18:28:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:18.443 18:28:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:30:18.443 18:28:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:30:18.443 18:28:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:30:18.443 18:28:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:30:18.443 18:28:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:30:18.443 18:28:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:30:18.443 18:28:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:30:18.443 18:28:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:30:18.443 18:28:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:30:18.443 18:28:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:30:18.443 18:28:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:18.443 18:28:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:18.443 18:28:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:18.443 18:28:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:30:18.723 18:28:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:18.723 18:28:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:30:18.723 "name": "raid_bdev1", 00:30:18.723 "uuid": "7a5fe7fa-a1c6-45f3-a5fd-c526b8af2700", 00:30:18.723 "strip_size_kb": 64, 00:30:18.723 "state": "online", 00:30:18.723 "raid_level": "concat", 00:30:18.723 "superblock": true, 00:30:18.723 "num_base_bdevs": 4, 00:30:18.723 "num_base_bdevs_discovered": 4, 00:30:18.723 "num_base_bdevs_operational": 4, 00:30:18.723 "base_bdevs_list": [ 00:30:18.723 { 00:30:18.723 "name": "pt1", 00:30:18.723 "uuid": "00000000-0000-0000-0000-000000000001", 00:30:18.723 "is_configured": true, 00:30:18.723 "data_offset": 2048, 00:30:18.723 "data_size": 63488 00:30:18.723 }, 00:30:18.723 { 00:30:18.723 "name": "pt2", 00:30:18.723 "uuid": "00000000-0000-0000-0000-000000000002", 00:30:18.723 "is_configured": true, 00:30:18.723 "data_offset": 2048, 00:30:18.723 "data_size": 63488 00:30:18.723 }, 00:30:18.723 { 00:30:18.723 "name": "pt3", 00:30:18.723 "uuid": "00000000-0000-0000-0000-000000000003", 00:30:18.723 "is_configured": true, 00:30:18.723 "data_offset": 2048, 00:30:18.723 
"data_size": 63488 00:30:18.723 }, 00:30:18.723 { 00:30:18.723 "name": "pt4", 00:30:18.723 "uuid": "00000000-0000-0000-0000-000000000004", 00:30:18.723 "is_configured": true, 00:30:18.723 "data_offset": 2048, 00:30:18.723 "data_size": 63488 00:30:18.723 } 00:30:18.723 ] 00:30:18.723 }' 00:30:18.723 18:28:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:30:18.723 18:28:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:30:18.983 18:28:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:30:18.983 18:28:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:30:18.983 18:28:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:30:18.983 18:28:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:30:18.983 18:28:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:30:18.983 18:28:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:30:18.983 18:28:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:30:18.983 18:28:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:18.983 18:28:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:30:18.983 18:28:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:30:18.983 [2024-12-06 18:28:49.773129] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:30:18.983 18:28:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:18.983 18:28:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:30:18.983 "name": "raid_bdev1", 00:30:18.983 "aliases": [ 00:30:18.983 "7a5fe7fa-a1c6-45f3-a5fd-c526b8af2700" 
00:30:18.983 ], 00:30:18.983 "product_name": "Raid Volume", 00:30:18.983 "block_size": 512, 00:30:18.983 "num_blocks": 253952, 00:30:18.983 "uuid": "7a5fe7fa-a1c6-45f3-a5fd-c526b8af2700", 00:30:18.983 "assigned_rate_limits": { 00:30:18.983 "rw_ios_per_sec": 0, 00:30:18.983 "rw_mbytes_per_sec": 0, 00:30:18.983 "r_mbytes_per_sec": 0, 00:30:18.983 "w_mbytes_per_sec": 0 00:30:18.983 }, 00:30:18.983 "claimed": false, 00:30:18.983 "zoned": false, 00:30:18.983 "supported_io_types": { 00:30:18.983 "read": true, 00:30:18.983 "write": true, 00:30:18.983 "unmap": true, 00:30:18.983 "flush": true, 00:30:18.983 "reset": true, 00:30:18.983 "nvme_admin": false, 00:30:18.983 "nvme_io": false, 00:30:18.983 "nvme_io_md": false, 00:30:18.983 "write_zeroes": true, 00:30:18.983 "zcopy": false, 00:30:18.983 "get_zone_info": false, 00:30:18.983 "zone_management": false, 00:30:18.983 "zone_append": false, 00:30:18.983 "compare": false, 00:30:18.983 "compare_and_write": false, 00:30:18.983 "abort": false, 00:30:18.983 "seek_hole": false, 00:30:18.983 "seek_data": false, 00:30:18.983 "copy": false, 00:30:18.983 "nvme_iov_md": false 00:30:18.983 }, 00:30:18.983 "memory_domains": [ 00:30:18.983 { 00:30:18.983 "dma_device_id": "system", 00:30:18.983 "dma_device_type": 1 00:30:18.983 }, 00:30:18.983 { 00:30:18.983 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:18.983 "dma_device_type": 2 00:30:18.983 }, 00:30:18.983 { 00:30:18.983 "dma_device_id": "system", 00:30:18.983 "dma_device_type": 1 00:30:18.983 }, 00:30:18.983 { 00:30:18.983 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:18.984 "dma_device_type": 2 00:30:18.984 }, 00:30:18.984 { 00:30:18.984 "dma_device_id": "system", 00:30:18.984 "dma_device_type": 1 00:30:18.984 }, 00:30:18.984 { 00:30:18.984 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:18.984 "dma_device_type": 2 00:30:18.984 }, 00:30:18.984 { 00:30:18.984 "dma_device_id": "system", 00:30:18.984 "dma_device_type": 1 00:30:18.984 }, 00:30:18.984 { 00:30:18.984 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:30:18.984 "dma_device_type": 2 00:30:18.984 } 00:30:18.984 ], 00:30:18.984 "driver_specific": { 00:30:18.984 "raid": { 00:30:18.984 "uuid": "7a5fe7fa-a1c6-45f3-a5fd-c526b8af2700", 00:30:18.984 "strip_size_kb": 64, 00:30:18.984 "state": "online", 00:30:18.984 "raid_level": "concat", 00:30:18.984 "superblock": true, 00:30:18.984 "num_base_bdevs": 4, 00:30:18.984 "num_base_bdevs_discovered": 4, 00:30:18.984 "num_base_bdevs_operational": 4, 00:30:18.984 "base_bdevs_list": [ 00:30:18.984 { 00:30:18.984 "name": "pt1", 00:30:18.984 "uuid": "00000000-0000-0000-0000-000000000001", 00:30:18.984 "is_configured": true, 00:30:18.984 "data_offset": 2048, 00:30:18.984 "data_size": 63488 00:30:18.984 }, 00:30:18.984 { 00:30:18.984 "name": "pt2", 00:30:18.984 "uuid": "00000000-0000-0000-0000-000000000002", 00:30:18.984 "is_configured": true, 00:30:18.984 "data_offset": 2048, 00:30:18.984 "data_size": 63488 00:30:18.984 }, 00:30:18.984 { 00:30:18.984 "name": "pt3", 00:30:18.984 "uuid": "00000000-0000-0000-0000-000000000003", 00:30:18.984 "is_configured": true, 00:30:18.984 "data_offset": 2048, 00:30:18.984 "data_size": 63488 00:30:18.984 }, 00:30:18.984 { 00:30:18.984 "name": "pt4", 00:30:18.984 "uuid": "00000000-0000-0000-0000-000000000004", 00:30:18.984 "is_configured": true, 00:30:18.984 "data_offset": 2048, 00:30:18.984 "data_size": 63488 00:30:18.984 } 00:30:18.984 ] 00:30:18.984 } 00:30:18.984 } 00:30:18.984 }' 00:30:18.984 18:28:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:30:18.984 18:28:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:30:18.984 pt2 00:30:18.984 pt3 00:30:18.984 pt4' 00:30:18.984 18:28:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:30:18.984 18:28:49 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:30:18.984 18:28:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:30:18.984 18:28:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:30:18.984 18:28:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:30:18.984 18:28:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:18.984 18:28:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:30:18.984 18:28:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:19.243 18:28:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:30:19.243 18:28:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:30:19.243 18:28:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:30:19.243 18:28:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:30:19.243 18:28:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:30:19.243 18:28:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:19.243 18:28:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:30:19.243 18:28:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:19.243 18:28:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:30:19.243 18:28:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:30:19.243 18:28:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:30:19.243 18:28:49 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:30:19.243 18:28:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:19.243 18:28:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:30:19.243 18:28:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:30:19.243 18:28:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:19.243 18:28:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:30:19.243 18:28:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:30:19.243 18:28:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:30:19.243 18:28:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:30:19.243 18:28:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:19.243 18:28:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:30:19.243 18:28:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:30:19.243 18:28:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:19.243 18:28:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:30:19.243 18:28:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:30:19.243 18:28:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:30:19.243 18:28:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:30:19.243 18:28:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # 
xtrace_disable
00:30:19.243 18:28:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:30:19.243 [2024-12-06 18:28:50.096580] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:30:19.243 18:28:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:19.243 18:28:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=7a5fe7fa-a1c6-45f3-a5fd-c526b8af2700
00:30:19.243 18:28:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 7a5fe7fa-a1c6-45f3-a5fd-c526b8af2700 ']'
00:30:19.243 18:28:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1
00:30:19.243 18:28:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:19.243 18:28:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:30:19.243 [2024-12-06 18:28:50.144266] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:30:19.243 [2024-12-06 18:28:50.144394] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:30:19.243 [2024-12-06 18:28:50.144548] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:30:19.243 [2024-12-06 18:28:50.144650] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:30:19.243 [2024-12-06 18:28:50.144775] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline
00:30:19.243 18:28:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:19.243 18:28:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all
00:30:19.243 18:28:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:19.243 18:28:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:30:19.243 18:28:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]'
00:30:19.243 18:28:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:19.502 18:28:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev=
00:30:19.502 18:28:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']'
00:30:19.502 18:28:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}"
00:30:19.502 18:28:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1
00:30:19.502 18:28:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:19.502 18:28:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:30:19.502 18:28:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:19.502 18:28:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}"
00:30:19.502 18:28:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2
00:30:19.502 18:28:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:19.502 18:28:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:30:19.503 18:28:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:19.503 18:28:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}"
00:30:19.503 18:28:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3
00:30:19.503 18:28:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:19.503 18:28:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:30:19.503 18:28:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:19.503 18:28:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}"
00:30:19.503 18:28:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4
00:30:19.503 18:28:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:19.503 18:28:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:30:19.503 18:28:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:19.503 18:28:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs
00:30:19.503 18:28:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:19.503 18:28:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any'
00:30:19.503 18:28:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:30:19.503 18:28:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:19.503 18:28:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']'
00:30:19.503 18:28:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1
00:30:19.503 18:28:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0
00:30:19.503 18:28:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1
00:30:19.503 18:28:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd
00:30:19.503 18:28:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:30:19.503 18:28:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd
00:30:19.503 18:28:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:30:19.503 18:28:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1
00:30:19.503 18:28:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:19.503 18:28:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:30:19.503 [2024-12-06 18:28:50.316062] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed
00:30:19.503 [2024-12-06 18:28:50.318363] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed
00:30:19.503 [2024-12-06 18:28:50.318550] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed
00:30:19.503 [2024-12-06 18:28:50.318600] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed
00:30:19.503 [2024-12-06 18:28:50.318665] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1
00:30:19.503 [2024-12-06 18:28:50.318741] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2
00:30:19.503 [2024-12-06 18:28:50.318764] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3
00:30:19.503 [2024-12-06 18:28:50.318804] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4
00:30:19.503 [2024-12-06 18:28:50.318821] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:30:19.503 [2024-12-06 18:28:50.318835] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring
00:30:19.503 request:
00:30:19.503 {
00:30:19.503 "name": "raid_bdev1",
00:30:19.503 "raid_level": "concat",
00:30:19.503 "base_bdevs": [
00:30:19.503 "malloc1",
00:30:19.503 "malloc2",
00:30:19.503 "malloc3",
00:30:19.503 "malloc4"
00:30:19.503 ],
00:30:19.503 "strip_size_kb": 64,
00:30:19.503 "superblock": false,
00:30:19.503 "method": "bdev_raid_create",
00:30:19.503 "req_id": 1
00:30:19.503 }
00:30:19.503 Got JSON-RPC error response
00:30:19.503 response:
00:30:19.503 {
00:30:19.503 "code": -17,
00:30:19.503 "message": "Failed to create RAID bdev raid_bdev1: File exists"
00:30:19.503 }
00:30:19.503 18:28:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]]
00:30:19.503 18:28:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1
00:30:19.503 18:28:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:30:19.503 18:28:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:30:19.503 18:28:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:30:19.503 18:28:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all
00:30:19.503 18:28:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:19.503 18:28:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]'
00:30:19.503 18:28:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:30:19.503 18:28:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:19.503 18:28:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev=
00:30:19.503 18:28:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']'
00:30:19.503 18:28:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
00:30:19.503 18:28:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:19.503 18:28:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:30:19.503 [2024-12-06 18:28:50.387915] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1
00:30:19.503 [2024-12-06 18:28:50.387983] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:30:19.503 [2024-12-06 18:28:50.388006] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280
00:30:19.503 [2024-12-06 18:28:50.388020] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed
00:30:19.503 [2024-12-06 18:28:50.390667] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:30:19.503 [2024-12-06 18:28:50.390842] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1
00:30:19.503 [2024-12-06 18:28:50.390954] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1
00:30:19.503 [2024-12-06 18:28:50.391026] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed
00:30:19.503 pt1
00:30:19.503 18:28:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:19.503 18:28:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4
00:30:19.503 18:28:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:30:19.503 18:28:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:30:19.503 18:28:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:30:19.503 18:28:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:30:19.503 18:28:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:30:19.503 18:28:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:30:19.503 18:28:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:30:19.503 18:28:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:30:19.503 18:28:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:30:19.503 18:28:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:30:19.503 18:28:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:19.503 18:28:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:30:19.503 18:28:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:30:19.503 18:28:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:19.503 18:28:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:30:19.503 "name": "raid_bdev1",
00:30:19.503 "uuid": "7a5fe7fa-a1c6-45f3-a5fd-c526b8af2700",
00:30:19.503 "strip_size_kb": 64,
00:30:19.503 "state": "configuring",
00:30:19.503 "raid_level": "concat",
00:30:19.503 "superblock": true,
00:30:19.503 "num_base_bdevs": 4,
00:30:19.503 "num_base_bdevs_discovered": 1,
00:30:19.503 "num_base_bdevs_operational": 4,
00:30:19.503 "base_bdevs_list": [
00:30:19.503 {
00:30:19.503 "name": "pt1",
00:30:19.503 "uuid": "00000000-0000-0000-0000-000000000001",
00:30:19.503 "is_configured": true,
00:30:19.503 "data_offset": 2048,
00:30:19.503 "data_size": 63488
00:30:19.503 },
00:30:19.503 {
00:30:19.503 "name": null,
00:30:19.503 "uuid": "00000000-0000-0000-0000-000000000002",
00:30:19.503 "is_configured": false,
00:30:19.503 "data_offset": 2048,
00:30:19.503 "data_size": 63488
00:30:19.503 },
00:30:19.503 {
00:30:19.503 "name": null, "uuid": "00000000-0000-0000-0000-000000000003",
00:30:19.503 "is_configured": false,
00:30:19.503 "data_offset": 2048,
00:30:19.503 "data_size": 63488
00:30:19.503 },
00:30:19.503 {
00:30:19.503 "name": null,
00:30:19.503 "uuid": "00000000-0000-0000-0000-000000000004",
00:30:19.503 "is_configured": false,
00:30:19.503 "data_offset": 2048,
00:30:19.503 "data_size": 63488
00:30:19.503 }
00:30:19.503 ]
00:30:19.503 }'
00:30:19.503 18:28:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:30:19.503 18:28:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:30:20.071 18:28:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']'
00:30:20.071 18:28:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:30:20.071 18:28:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:20.071 18:28:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:30:20.071 [2024-12-06 18:28:50.827740] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:30:20.071 [2024-12-06 18:28:50.827944] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:30:20.071 [2024-12-06 18:28:50.828000] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880
00:30:20.071 [2024-12-06 18:28:50.828082] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed
00:30:20.071 [2024-12-06 18:28:50.828571] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:30:20.071 [2024-12-06 18:28:50.828744] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:30:20.071 [2024-12-06 18:28:50.828937] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2
00:30:20.071 [2024-12-06 18:28:50.829048] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:30:20.071 pt2
00:30:20.071 18:28:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:20.071 18:28:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2
00:30:20.071 18:28:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:20.071 18:28:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:30:20.071 [2024-12-06 18:28:50.839729] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2
00:30:20.071 18:28:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:20.071 18:28:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4
00:30:20.071 18:28:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:30:20.071 18:28:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:30:20.071 18:28:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:30:20.071 18:28:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:30:20.071 18:28:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:30:20.071 18:28:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:30:20.071 18:28:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:30:20.071 18:28:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:30:20.071 18:28:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:30:20.071 18:28:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:30:20.071 18:28:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:30:20.071 18:28:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:20.071 18:28:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:30:20.071 18:28:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:20.071 18:28:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:30:20.071 "name": "raid_bdev1",
00:30:20.071 "uuid": "7a5fe7fa-a1c6-45f3-a5fd-c526b8af2700",
00:30:20.071 "strip_size_kb": 64,
00:30:20.071 "state": "configuring",
00:30:20.071 "raid_level": "concat",
00:30:20.071 "superblock": true,
00:30:20.071 "num_base_bdevs": 4,
00:30:20.071 "num_base_bdevs_discovered": 1,
00:30:20.071 "num_base_bdevs_operational": 4,
00:30:20.071 "base_bdevs_list": [
00:30:20.071 {
00:30:20.071 "name": "pt1",
00:30:20.071 "uuid": "00000000-0000-0000-0000-000000000001",
00:30:20.071 "is_configured": true,
00:30:20.071 "data_offset": 2048,
00:30:20.071 "data_size": 63488
00:30:20.071 },
00:30:20.071 {
00:30:20.071 "name": null,
00:30:20.071 "uuid": "00000000-0000-0000-0000-000000000002",
00:30:20.071 "is_configured": false,
00:30:20.071 "data_offset": 0,
00:30:20.071 "data_size": 63488
00:30:20.071 },
00:30:20.071 {
00:30:20.071 "name": null,
00:30:20.071 "uuid": "00000000-0000-0000-0000-000000000003",
00:30:20.071 "is_configured": false,
00:30:20.071 "data_offset": 2048,
00:30:20.071 "data_size": 63488
00:30:20.071 },
00:30:20.071 {
00:30:20.071 "name": null,
00:30:20.071 "uuid": "00000000-0000-0000-0000-000000000004",
00:30:20.071 "is_configured": false,
00:30:20.071 "data_offset": 2048,
00:30:20.071 "data_size": 63488
00:30:20.071 }
00:30:20.071 ]
00:30:20.071 }'
00:30:20.071 18:28:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:30:20.071 18:28:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:30:20.330 18:28:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 ))
00:30:20.330 18:28:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs ))
00:30:20.330 18:28:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:30:20.330 18:28:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:20.330 18:28:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:30:20.590 [2024-12-06 18:28:51.279111] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:30:20.590 [2024-12-06 18:28:51.279198] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:30:20.590 [2024-12-06 18:28:51.279223] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80
00:30:20.590 [2024-12-06 18:28:51.279235] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed
00:30:20.590 [2024-12-06 18:28:51.279681] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:30:20.590 [2024-12-06 18:28:51.279700] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:30:20.590 [2024-12-06 18:28:51.279779] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2
00:30:20.590 [2024-12-06 18:28:51.279801] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:30:20.590 pt2
00:30:20.590 18:28:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:20.590 18:28:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ ))
00:30:20.590 18:28:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs ))
00:30:20.590 18:28:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003
00:30:20.590 18:28:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:20.590 18:28:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:30:20.590 [2024-12-06 18:28:51.291068] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3
00:30:20.590 [2024-12-06 18:28:51.291124] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:30:20.590 [2024-12-06 18:28:51.291156] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80
00:30:20.590 [2024-12-06 18:28:51.291168] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed
00:30:20.590 [2024-12-06 18:28:51.291545] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:30:20.590 [2024-12-06 18:28:51.291568] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3
00:30:20.590 [2024-12-06 18:28:51.291630] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3
00:30:20.590 [2024-12-06 18:28:51.291655] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed
00:30:20.590 pt3
00:30:20.590 18:28:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:20.590 18:28:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ ))
00:30:20.590 18:28:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs ))
00:30:20.590 18:28:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004
00:30:20.590 18:28:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:20.590 18:28:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:30:20.590 [2024-12-06 18:28:51.303025] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc4
00:30:20.590 [2024-12-06 18:28:51.303073] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:30:20.590 [2024-12-06 18:28:51.303092] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180
00:30:20.590 [2024-12-06 18:28:51.303102] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed
00:30:20.590 [2024-12-06 18:28:51.303485] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:30:20.590 [2024-12-06 18:28:51.303504] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4
00:30:20.590 [2024-12-06 18:28:51.303564] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4
00:30:20.590 [2024-12-06 18:28:51.303593] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed
00:30:20.590 [2024-12-06 18:28:51.303728] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80
00:30:20.590 [2024-12-06 18:28:51.303737] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512
00:30:20.590 [2024-12-06 18:28:51.303971] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0
00:30:20.590 [2024-12-06 18:28:51.304097] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80
00:30:20.590 [2024-12-06 18:28:51.304111] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80
00:30:20.590 [2024-12-06 18:28:51.304249] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:30:20.590 pt4
00:30:20.590 18:28:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:20.590 18:28:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ ))
00:30:20.590 18:28:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs ))
00:30:20.590 18:28:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4
00:30:20.590 18:28:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:30:20.590 18:28:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:30:20.590 18:28:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:30:20.590 18:28:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:30:20.590 18:28:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:30:20.590 18:28:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:30:20.590 18:28:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:30:20.590 18:28:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:30:20.590 18:28:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:30:20.590 18:28:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:30:20.590 18:28:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:30:20.590 18:28:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:20.590 18:28:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:30:20.590 18:28:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:20.590 18:28:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:30:20.590 "name": "raid_bdev1",
00:30:20.590 "uuid": "7a5fe7fa-a1c6-45f3-a5fd-c526b8af2700",
00:30:20.590 "strip_size_kb": 64,
00:30:20.590 "state": "online",
00:30:20.590 "raid_level": "concat", "superblock": true,
00:30:20.590 "num_base_bdevs": 4,
00:30:20.590 "num_base_bdevs_discovered": 4,
00:30:20.590 "num_base_bdevs_operational": 4,
00:30:20.590 "base_bdevs_list": [
00:30:20.590 {
00:30:20.590 "name": "pt1",
00:30:20.590 "uuid": "00000000-0000-0000-0000-000000000001",
00:30:20.590 "is_configured": true,
00:30:20.590 "data_offset": 2048,
00:30:20.590 "data_size": 63488
00:30:20.590 },
00:30:20.590 {
00:30:20.590 "name": "pt2",
00:30:20.590 "uuid": "00000000-0000-0000-0000-000000000002",
00:30:20.590 "is_configured": true,
00:30:20.590 "data_offset": 2048,
00:30:20.590 "data_size": 63488
00:30:20.590 },
00:30:20.590 {
00:30:20.590 "name": "pt3",
00:30:20.590 "uuid": "00000000-0000-0000-0000-000000000003",
00:30:20.590 "is_configured": true,
00:30:20.590 "data_offset": 2048,
00:30:20.590 "data_size": 63488
00:30:20.590 },
00:30:20.590 {
00:30:20.590 "name": "pt4",
00:30:20.590 "uuid": "00000000-0000-0000-0000-000000000004",
00:30:20.590 "is_configured": true,
00:30:20.590 "data_offset": 2048,
00:30:20.590 "data_size": 63488
00:30:20.590 }
00:30:20.590 ]
00:30:20.590 }'
00:30:20.590 18:28:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:30:20.590 18:28:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:30:20.850 18:28:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1
00:30:20.850 18:28:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1
00:30:20.850 18:28:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info
00:30:20.850 18:28:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names
00:30:20.850 18:28:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name
00:30:20.850 18:28:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev
00:30:20.850 18:28:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:30:20.850 18:28:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]'
00:30:20.850 18:28:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:20.850 18:28:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:30:20.850 [2024-12-06 18:28:51.734837] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:30:20.850 18:28:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:20.850 18:28:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:30:20.850 "name": "raid_bdev1",
00:30:20.850 "aliases": [
00:30:20.850 "7a5fe7fa-a1c6-45f3-a5fd-c526b8af2700"
00:30:20.850 ],
00:30:20.850 "product_name": "Raid Volume",
00:30:20.850 "block_size": 512,
00:30:20.850 "num_blocks": 253952,
00:30:20.850 "uuid": "7a5fe7fa-a1c6-45f3-a5fd-c526b8af2700",
00:30:20.850 "assigned_rate_limits": {
00:30:20.850 "rw_ios_per_sec": 0,
00:30:20.850 "rw_mbytes_per_sec": 0,
00:30:20.850 "r_mbytes_per_sec": 0,
00:30:20.850 "w_mbytes_per_sec": 0
00:30:20.850 },
00:30:20.850 "claimed": false,
00:30:20.850 "zoned": false,
00:30:20.850 "supported_io_types": {
00:30:20.850 "read": true,
00:30:20.850 "write": true,
00:30:20.850 "unmap": true,
00:30:20.850 "flush": true,
00:30:20.850 "reset": true,
00:30:20.850 "nvme_admin": false,
00:30:20.850 "nvme_io": false,
00:30:20.850 "nvme_io_md": false,
00:30:20.850 "write_zeroes": true,
00:30:20.850 "zcopy": false,
00:30:20.850 "get_zone_info": false,
00:30:20.850 "zone_management": false,
00:30:20.850 "zone_append": false,
00:30:20.850 "compare": false,
00:30:20.850 "compare_and_write": false,
00:30:20.850 "abort": false,
00:30:20.850 "seek_hole": false,
00:30:20.850 "seek_data": false,
00:30:20.850 "copy": false,
00:30:20.850 "nvme_iov_md": false
00:30:20.850 },
00:30:20.850 "memory_domains": [
00:30:20.850 {
00:30:20.850 "dma_device_id": "system",
00:30:20.850 "dma_device_type": 1
00:30:20.850 },
00:30:20.850 {
00:30:20.850 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:30:20.850 "dma_device_type": 2
00:30:20.850 },
00:30:20.850 {
00:30:20.850 "dma_device_id": "system",
00:30:20.850 "dma_device_type": 1
00:30:20.850 },
00:30:20.850 {
00:30:20.850 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:30:20.850 "dma_device_type": 2
00:30:20.850 },
00:30:20.850 {
00:30:20.850 "dma_device_id": "system",
00:30:20.850 "dma_device_type": 1
00:30:20.850 },
00:30:20.850 {
00:30:20.850 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:30:20.850 "dma_device_type": 2
00:30:20.850 },
00:30:20.850 {
00:30:20.850 "dma_device_id": "system",
00:30:20.850 "dma_device_type": 1
00:30:20.850 },
00:30:20.850 {
00:30:20.850 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:30:20.850 "dma_device_type": 2
00:30:20.850 }
00:30:20.850 ],
00:30:20.850 "driver_specific": {
00:30:20.850 "raid": {
00:30:20.850 "uuid": "7a5fe7fa-a1c6-45f3-a5fd-c526b8af2700",
00:30:20.850 "strip_size_kb": 64,
00:30:20.850 "state": "online",
00:30:20.850 "raid_level": "concat",
00:30:20.850 "superblock": true,
00:30:20.850 "num_base_bdevs": 4,
00:30:20.850 "num_base_bdevs_discovered": 4,
00:30:20.850 "num_base_bdevs_operational": 4,
00:30:20.850 "base_bdevs_list": [
00:30:20.850 {
00:30:20.850 "name": "pt1",
00:30:20.850 "uuid": "00000000-0000-0000-0000-000000000001",
00:30:20.850 "is_configured": true,
00:30:20.850 "data_offset": 2048,
00:30:20.850 "data_size": 63488
00:30:20.850 },
00:30:20.850 {
00:30:20.850 "name": "pt2",
00:30:20.850 "uuid": "00000000-0000-0000-0000-000000000002",
00:30:20.850 "is_configured": true,
00:30:20.850 "data_offset": 2048,
00:30:20.850 "data_size": 63488
00:30:20.850 },
00:30:20.850 {
00:30:20.850 "name": "pt3",
00:30:20.850 "uuid": "00000000-0000-0000-0000-000000000003",
00:30:20.850 "is_configured": true,
00:30:20.850 "data_offset": 2048,
00:30:20.850 "data_size": 63488
00:30:20.850 },
00:30:20.850 {
00:30:20.850 "name": "pt4",
00:30:20.850 "uuid": "00000000-0000-0000-0000-000000000004",
00:30:20.850 "is_configured": true,
00:30:20.850 "data_offset": 2048,
00:30:20.850 "data_size": 63488
00:30:20.850 }
00:30:20.850 ]
00:30:20.850 }
00:30:20.850 }
00:30:20.850 }'
00:30:20.850 18:28:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:30:21.109 18:28:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1
00:30:21.109 pt2
00:30:21.109 pt3
00:30:21.109 pt4'
00:30:21.109 18:28:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:30:21.109 18:28:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 '
00:30:21.109 18:28:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:30:21.109 18:28:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1
00:30:21.109 18:28:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:21.109 18:28:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:30:21.109 18:28:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:30:21.109 18:28:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:21.109 18:28:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:30:21.109 18:28:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:30:21.109 18:28:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:30:21.109 18:28:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:30:21.109 18:28:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2
00:30:21.109 18:28:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:21.109 18:28:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:30:21.109 18:28:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:21.109 18:28:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:30:21.109 18:28:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:30:21.109 18:28:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:30:21.109 18:28:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3
00:30:21.109 18:28:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:30:21.109 18:28:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:21.109 18:28:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:30:21.109 18:28:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:21.109 18:28:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:30:21.109 18:28:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:30:21.109 18:28:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:30:21.109 18:28:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4
00:30:21.109 18:28:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:21.109 18:28:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:30:21.109 18:28:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:30:21.109 18:28:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:21.368 18:28:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:30:21.368 18:28:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:30:21.368 18:28:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:30:21.368 18:28:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:21.368 18:28:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:30:21.368 18:28:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid'
00:30:21.368 [2024-12-06 18:28:52.066361] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:30:21.368 18:28:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:21.368 18:28:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 7a5fe7fa-a1c6-45f3-a5fd-c526b8af2700 '!=' 7a5fe7fa-a1c6-45f3-a5fd-c526b8af2700 ']'
00:30:21.368 18:28:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat
00:30:21.368 18:28:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in
00:30:21.368 18:28:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1
00:30:21.368 18:28:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 72348
00:30:21.368 18:28:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 72348 ']'
00:30:21.368 18:28:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 72348
00:30:21.368 18:28:52 bdev_raid.raid_superblock_test --
common/autotest_common.sh@959 -- # uname 00:30:21.368 18:28:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:21.368 18:28:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72348 00:30:21.368 killing process with pid 72348 00:30:21.368 18:28:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:30:21.368 18:28:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:30:21.368 18:28:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72348' 00:30:21.368 18:28:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 72348 00:30:21.368 [2024-12-06 18:28:52.155866] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:30:21.368 [2024-12-06 18:28:52.155940] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:30:21.368 18:28:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 72348 00:30:21.368 [2024-12-06 18:28:52.156011] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:30:21.368 [2024-12-06 18:28:52.156022] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:30:21.627 [2024-12-06 18:28:52.562198] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:30:23.003 ************************************ 00:30:23.003 END TEST raid_superblock_test 00:30:23.003 ************************************ 00:30:23.003 18:28:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:30:23.003 00:30:23.003 real 0m5.578s 00:30:23.003 user 0m7.937s 00:30:23.003 sys 0m1.094s 00:30:23.003 18:28:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:23.003 18:28:53 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:30:23.003 18:28:53 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test concat 4 read 00:30:23.003 18:28:53 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:30:23.004 18:28:53 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:23.004 18:28:53 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:30:23.004 ************************************ 00:30:23.004 START TEST raid_read_error_test 00:30:23.004 ************************************ 00:30:23.004 18:28:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 4 read 00:30:23.004 18:28:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:30:23.004 18:28:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:30:23.004 18:28:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:30:23.004 18:28:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:30:23.004 18:28:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:30:23.004 18:28:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:30:23.004 18:28:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:30:23.004 18:28:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:30:23.004 18:28:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:30:23.004 18:28:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:30:23.004 18:28:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:30:23.004 18:28:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:30:23.004 18:28:53 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i++ )) 00:30:23.004 18:28:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:30:23.004 18:28:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:30:23.004 18:28:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:30:23.004 18:28:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:30:23.004 18:28:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:30:23.004 18:28:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:30:23.004 18:28:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:30:23.004 18:28:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:30:23.004 18:28:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:30:23.004 18:28:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:30:23.004 18:28:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:30:23.004 18:28:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:30:23.004 18:28:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:30:23.004 18:28:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:30:23.004 18:28:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:30:23.004 18:28:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.MqO6hPY6nt 00:30:23.004 18:28:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=72608 00:30:23.004 18:28:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 72608 00:30:23.004 Waiting for process to start up and listen 
on UNIX domain socket /var/tmp/spdk.sock... 00:30:23.004 18:28:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 72608 ']' 00:30:23.004 18:28:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:23.004 18:28:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:23.004 18:28:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:23.004 18:28:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:23.004 18:28:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:30:23.004 18:28:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:30:23.004 [2024-12-06 18:28:53.914957] Starting SPDK v25.01-pre git sha1 b6a18b192 / DPDK 24.03.0 initialization... 
00:30:23.004 [2024-12-06 18:28:53.915083] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72608 ] 00:30:23.263 [2024-12-06 18:28:54.103797] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:23.521 [2024-12-06 18:28:54.218379] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:23.521 [2024-12-06 18:28:54.427674] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:30:23.521 [2024-12-06 18:28:54.427732] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:30:24.087 18:28:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:24.087 18:28:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:30:24.087 18:28:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:30:24.087 18:28:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:30:24.087 18:28:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:24.087 18:28:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:30:24.087 BaseBdev1_malloc 00:30:24.087 18:28:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:24.087 18:28:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:30:24.087 18:28:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:24.087 18:28:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:30:24.087 true 00:30:24.087 18:28:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:30:24.087 18:28:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:30:24.087 18:28:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:24.087 18:28:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:30:24.087 [2024-12-06 18:28:54.836467] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:30:24.087 [2024-12-06 18:28:54.836528] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:30:24.087 [2024-12-06 18:28:54.836550] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:30:24.087 [2024-12-06 18:28:54.836564] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:30:24.087 [2024-12-06 18:28:54.838920] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:30:24.087 [2024-12-06 18:28:54.839095] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:30:24.087 BaseBdev1 00:30:24.087 18:28:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:24.087 18:28:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:30:24.087 18:28:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:30:24.087 18:28:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:24.087 18:28:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:30:24.087 BaseBdev2_malloc 00:30:24.088 18:28:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:24.088 18:28:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:30:24.088 18:28:54 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:30:24.088 18:28:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:30:24.088 true 00:30:24.088 18:28:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:24.088 18:28:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:30:24.088 18:28:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:24.088 18:28:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:30:24.088 [2024-12-06 18:28:54.904807] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:30:24.088 [2024-12-06 18:28:54.904865] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:30:24.088 [2024-12-06 18:28:54.904884] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:30:24.088 [2024-12-06 18:28:54.904898] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:30:24.088 [2024-12-06 18:28:54.907315] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:30:24.088 [2024-12-06 18:28:54.907358] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:30:24.088 BaseBdev2 00:30:24.088 18:28:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:24.088 18:28:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:30:24.088 18:28:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:30:24.088 18:28:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:24.088 18:28:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:30:24.088 BaseBdev3_malloc 00:30:24.088 18:28:54 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:24.088 18:28:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:30:24.088 18:28:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:24.088 18:28:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:30:24.088 true 00:30:24.088 18:28:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:24.088 18:28:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:30:24.088 18:28:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:24.088 18:28:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:30:24.088 [2024-12-06 18:28:54.987671] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:30:24.088 [2024-12-06 18:28:54.987725] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:30:24.088 [2024-12-06 18:28:54.987743] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:30:24.088 [2024-12-06 18:28:54.987757] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:30:24.088 [2024-12-06 18:28:54.990135] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:30:24.088 [2024-12-06 18:28:54.990185] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:30:24.088 BaseBdev3 00:30:24.088 18:28:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:24.088 18:28:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:30:24.088 18:28:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev4_malloc 00:30:24.088 18:28:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:24.088 18:28:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:30:24.346 BaseBdev4_malloc 00:30:24.346 18:28:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:24.346 18:28:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:30:24.346 18:28:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:24.346 18:28:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:30:24.346 true 00:30:24.346 18:28:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:24.346 18:28:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:30:24.346 18:28:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:24.346 18:28:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:30:24.346 [2024-12-06 18:28:55.057744] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:30:24.346 [2024-12-06 18:28:55.057968] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:30:24.346 [2024-12-06 18:28:55.058033] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:30:24.346 [2024-12-06 18:28:55.058122] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:30:24.346 [2024-12-06 18:28:55.061058] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:30:24.346 [2024-12-06 18:28:55.061227] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:30:24.346 BaseBdev4 00:30:24.346 18:28:55 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:24.346 18:28:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:30:24.346 18:28:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:24.346 18:28:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:30:24.346 [2024-12-06 18:28:55.069933] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:30:24.346 [2024-12-06 18:28:55.072210] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:30:24.346 [2024-12-06 18:28:55.072284] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:30:24.346 [2024-12-06 18:28:55.072346] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:30:24.346 [2024-12-06 18:28:55.072557] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:30:24.346 [2024-12-06 18:28:55.072574] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:30:24.346 [2024-12-06 18:28:55.072826] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:30:24.346 [2024-12-06 18:28:55.072980] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:30:24.346 [2024-12-06 18:28:55.072993] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:30:24.346 [2024-12-06 18:28:55.073312] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:30:24.346 18:28:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:24.346 18:28:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:30:24.346 18:28:55 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:30:24.346 18:28:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:30:24.346 18:28:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:30:24.346 18:28:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:30:24.346 18:28:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:30:24.346 18:28:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:30:24.346 18:28:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:30:24.346 18:28:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:30:24.346 18:28:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:30:24.346 18:28:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:24.346 18:28:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:24.346 18:28:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:30:24.346 18:28:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:24.346 18:28:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:24.346 18:28:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:30:24.346 "name": "raid_bdev1", 00:30:24.346 "uuid": "922ffaab-b136-4834-90d4-aec21322bc5d", 00:30:24.346 "strip_size_kb": 64, 00:30:24.346 "state": "online", 00:30:24.346 "raid_level": "concat", 00:30:24.346 "superblock": true, 00:30:24.346 "num_base_bdevs": 4, 00:30:24.346 "num_base_bdevs_discovered": 4, 00:30:24.346 "num_base_bdevs_operational": 4, 00:30:24.346 "base_bdevs_list": [ 
00:30:24.346 { 00:30:24.346 "name": "BaseBdev1", 00:30:24.346 "uuid": "a09ac49c-f056-598d-bdec-a5c64c2ddd54", 00:30:24.346 "is_configured": true, 00:30:24.346 "data_offset": 2048, 00:30:24.346 "data_size": 63488 00:30:24.346 }, 00:30:24.346 { 00:30:24.346 "name": "BaseBdev2", 00:30:24.346 "uuid": "c69400d5-4bb3-5fdd-8938-2bb4abb5b85a", 00:30:24.346 "is_configured": true, 00:30:24.346 "data_offset": 2048, 00:30:24.346 "data_size": 63488 00:30:24.346 }, 00:30:24.346 { 00:30:24.346 "name": "BaseBdev3", 00:30:24.346 "uuid": "6e808a90-6142-5b8c-ba80-cdad7d02749e", 00:30:24.346 "is_configured": true, 00:30:24.346 "data_offset": 2048, 00:30:24.346 "data_size": 63488 00:30:24.346 }, 00:30:24.346 { 00:30:24.346 "name": "BaseBdev4", 00:30:24.346 "uuid": "a19f426e-10b3-5d3f-80b5-10e0e05f55a7", 00:30:24.346 "is_configured": true, 00:30:24.346 "data_offset": 2048, 00:30:24.346 "data_size": 63488 00:30:24.346 } 00:30:24.346 ] 00:30:24.346 }' 00:30:24.346 18:28:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:30:24.346 18:28:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:30:24.604 18:28:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:30:24.604 18:28:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:30:24.862 [2024-12-06 18:28:55.587239] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:30:25.806 18:28:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:30:25.806 18:28:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:25.806 18:28:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:30:25.806 18:28:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:25.806 18:28:56 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:30:25.806 18:28:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:30:25.806 18:28:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:30:25.806 18:28:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:30:25.806 18:28:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:30:25.806 18:28:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:30:25.806 18:28:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:30:25.806 18:28:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:30:25.806 18:28:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:30:25.806 18:28:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:30:25.806 18:28:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:30:25.806 18:28:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:30:25.806 18:28:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:30:25.806 18:28:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:25.806 18:28:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:25.806 18:28:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:30:25.806 18:28:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:25.806 18:28:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:25.806 18:28:56 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:30:25.806 "name": "raid_bdev1", 00:30:25.806 "uuid": "922ffaab-b136-4834-90d4-aec21322bc5d", 00:30:25.806 "strip_size_kb": 64, 00:30:25.806 "state": "online", 00:30:25.806 "raid_level": "concat", 00:30:25.806 "superblock": true, 00:30:25.806 "num_base_bdevs": 4, 00:30:25.806 "num_base_bdevs_discovered": 4, 00:30:25.806 "num_base_bdevs_operational": 4, 00:30:25.806 "base_bdevs_list": [ 00:30:25.806 { 00:30:25.806 "name": "BaseBdev1", 00:30:25.806 "uuid": "a09ac49c-f056-598d-bdec-a5c64c2ddd54", 00:30:25.806 "is_configured": true, 00:30:25.806 "data_offset": 2048, 00:30:25.806 "data_size": 63488 00:30:25.806 }, 00:30:25.806 { 00:30:25.806 "name": "BaseBdev2", 00:30:25.806 "uuid": "c69400d5-4bb3-5fdd-8938-2bb4abb5b85a", 00:30:25.806 "is_configured": true, 00:30:25.806 "data_offset": 2048, 00:30:25.806 "data_size": 63488 00:30:25.806 }, 00:30:25.806 { 00:30:25.806 "name": "BaseBdev3", 00:30:25.806 "uuid": "6e808a90-6142-5b8c-ba80-cdad7d02749e", 00:30:25.806 "is_configured": true, 00:30:25.806 "data_offset": 2048, 00:30:25.806 "data_size": 63488 00:30:25.806 }, 00:30:25.806 { 00:30:25.806 "name": "BaseBdev4", 00:30:25.806 "uuid": "a19f426e-10b3-5d3f-80b5-10e0e05f55a7", 00:30:25.806 "is_configured": true, 00:30:25.806 "data_offset": 2048, 00:30:25.806 "data_size": 63488 00:30:25.806 } 00:30:25.806 ] 00:30:25.806 }' 00:30:25.806 18:28:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:30:25.806 18:28:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:30:26.200 18:28:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:30:26.200 18:28:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:26.200 18:28:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:30:26.200 [2024-12-06 18:28:56.923890] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:30:26.200 [2024-12-06 18:28:56.923935] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:30:26.200 [2024-12-06 18:28:56.926804] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:30:26.200 { 00:30:26.200 "results": [ 00:30:26.200 { 00:30:26.200 "job": "raid_bdev1", 00:30:26.200 "core_mask": "0x1", 00:30:26.200 "workload": "randrw", 00:30:26.200 "percentage": 50, 00:30:26.200 "status": "finished", 00:30:26.200 "queue_depth": 1, 00:30:26.200 "io_size": 131072, 00:30:26.200 "runtime": 1.336878, 00:30:26.200 "iops": 15447.931673645613, 00:30:26.200 "mibps": 1930.9914592057016, 00:30:26.200 "io_failed": 1, 00:30:26.200 "io_timeout": 0, 00:30:26.200 "avg_latency_us": 89.19284447138284, 00:30:26.200 "min_latency_us": 27.347791164658634, 00:30:26.200 "max_latency_us": 1467.3220883534136 00:30:26.200 } 00:30:26.200 ], 00:30:26.200 "core_count": 1 00:30:26.200 } 00:30:26.200 [2024-12-06 18:28:56.927012] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:30:26.200 [2024-12-06 18:28:56.927086] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:30:26.200 [2024-12-06 18:28:56.927105] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:30:26.200 18:28:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:26.200 18:28:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 72608 00:30:26.200 18:28:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 72608 ']' 00:30:26.200 18:28:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 72608 00:30:26.200 18:28:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:30:26.200 18:28:56 bdev_raid.raid_read_error_test 
-- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:26.200 18:28:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72608 00:30:26.201 killing process with pid 72608 00:30:26.201 18:28:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:30:26.201 18:28:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:30:26.201 18:28:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72608' 00:30:26.201 18:28:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 72608 00:30:26.201 [2024-12-06 18:28:56.969329] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:30:26.201 18:28:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 72608 00:30:26.460 [2024-12-06 18:28:57.310294] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:30:27.836 18:28:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.MqO6hPY6nt 00:30:27.836 18:28:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:30:27.836 18:28:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:30:27.836 18:28:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.75 00:30:27.836 18:28:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:30:27.836 18:28:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:30:27.836 18:28:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:30:27.836 18:28:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.75 != \0\.\0\0 ]] 00:30:27.836 00:30:27.836 real 0m4.743s 00:30:27.836 user 0m5.519s 00:30:27.836 sys 0m0.618s 00:30:27.836 ************************************ 00:30:27.836 END TEST raid_read_error_test 
00:30:27.836 ************************************ 00:30:27.836 18:28:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:27.836 18:28:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:30:27.836 18:28:58 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test concat 4 write 00:30:27.836 18:28:58 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:30:27.836 18:28:58 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:27.836 18:28:58 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:30:27.836 ************************************ 00:30:27.836 START TEST raid_write_error_test 00:30:27.836 ************************************ 00:30:27.836 18:28:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 4 write 00:30:27.836 18:28:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:30:27.836 18:28:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:30:27.836 18:28:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:30:27.837 18:28:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:30:27.837 18:28:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:30:27.837 18:28:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:30:27.837 18:28:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:30:27.837 18:28:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:30:27.837 18:28:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:30:27.837 18:28:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:30:27.837 18:28:58 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:30:27.837 18:28:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:30:27.837 18:28:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:30:27.837 18:28:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:30:27.837 18:28:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:30:27.837 18:28:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:30:27.837 18:28:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:30:27.837 18:28:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:30:27.837 18:28:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:30:27.837 18:28:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:30:27.837 18:28:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:30:27.837 18:28:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:30:27.837 18:28:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:30:27.837 18:28:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:30:27.837 18:28:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:30:27.837 18:28:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:30:27.837 18:28:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:30:27.837 18:28:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:30:27.837 18:28:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.34QqSpoHfd 00:30:27.837 18:28:58 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=72758 00:30:27.837 18:28:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:30:27.837 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:27.837 18:28:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 72758 00:30:27.837 18:28:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 72758 ']' 00:30:27.837 18:28:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:27.837 18:28:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:27.837 18:28:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:27.837 18:28:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:27.837 18:28:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:30:27.837 [2024-12-06 18:28:58.744098] Starting SPDK v25.01-pre git sha1 b6a18b192 / DPDK 24.03.0 initialization... 
00:30:27.837 [2024-12-06 18:28:58.744240] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72758 ] 00:30:28.095 [2024-12-06 18:28:58.925110] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:28.355 [2024-12-06 18:28:59.044581] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:28.355 [2024-12-06 18:28:59.257246] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:30:28.355 [2024-12-06 18:28:59.257517] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:30:28.923 18:28:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:28.923 18:28:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:30:28.923 18:28:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:30:28.923 18:28:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:30:28.923 18:28:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:28.923 18:28:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:30:28.923 BaseBdev1_malloc 00:30:28.923 18:28:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:28.923 18:28:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:30:28.923 18:28:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:28.923 18:28:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:30:28.923 true 00:30:28.923 18:28:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:30:28.923 18:28:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:30:28.923 18:28:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:28.923 18:28:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:30:28.923 [2024-12-06 18:28:59.708385] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:30:28.923 [2024-12-06 18:28:59.708580] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:30:28.923 [2024-12-06 18:28:59.708611] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:30:28.923 [2024-12-06 18:28:59.708625] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:30:28.923 [2024-12-06 18:28:59.711090] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:30:28.923 [2024-12-06 18:28:59.711138] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:30:28.923 BaseBdev1 00:30:28.923 18:28:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:28.923 18:28:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:30:28.923 18:28:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:30:28.923 18:28:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:28.923 18:28:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:30:28.923 BaseBdev2_malloc 00:30:28.923 18:28:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:28.923 18:28:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:30:28.923 18:28:59 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:28.923 18:28:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:30:28.923 true 00:30:28.923 18:28:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:28.923 18:28:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:30:28.923 18:28:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:28.923 18:28:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:30:28.923 [2024-12-06 18:28:59.777915] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:30:28.923 [2024-12-06 18:28:59.777976] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:30:28.923 [2024-12-06 18:28:59.777995] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:30:28.923 [2024-12-06 18:28:59.778008] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:30:28.923 [2024-12-06 18:28:59.780446] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:30:28.923 [2024-12-06 18:28:59.780491] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:30:28.923 BaseBdev2 00:30:28.923 18:28:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:28.923 18:28:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:30:28.923 18:28:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:30:28.923 18:28:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:28.923 18:28:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 
00:30:28.923 BaseBdev3_malloc 00:30:28.923 18:28:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:28.923 18:28:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:30:28.923 18:28:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:28.923 18:28:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:30:28.923 true 00:30:28.923 18:28:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:28.923 18:28:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:30:28.923 18:28:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:28.923 18:28:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:30:28.923 [2024-12-06 18:28:59.858710] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:30:28.923 [2024-12-06 18:28:59.858773] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:30:28.923 [2024-12-06 18:28:59.858792] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:30:28.923 [2024-12-06 18:28:59.858805] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:30:28.923 [2024-12-06 18:28:59.861167] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:30:28.923 [2024-12-06 18:28:59.861208] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:30:28.923 BaseBdev3 00:30:28.923 18:28:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:28.923 18:28:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:30:28.923 18:28:59 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:30:28.923 18:28:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:28.923 18:28:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:30:29.183 BaseBdev4_malloc 00:30:29.183 18:28:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:29.183 18:28:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:30:29.183 18:28:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:29.183 18:28:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:30:29.183 true 00:30:29.183 18:28:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:29.183 18:28:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:30:29.183 18:28:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:29.183 18:28:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:30:29.183 [2024-12-06 18:28:59.926014] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:30:29.183 [2024-12-06 18:28:59.926072] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:30:29.183 [2024-12-06 18:28:59.926092] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:30:29.183 [2024-12-06 18:28:59.926106] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:30:29.183 [2024-12-06 18:28:59.928461] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:30:29.183 [2024-12-06 18:28:59.928507] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:30:29.183 BaseBdev4 
00:30:29.183 18:28:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:29.183 18:28:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:30:29.183 18:28:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:29.183 18:28:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:30:29.183 [2024-12-06 18:28:59.938056] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:30:29.183 [2024-12-06 18:28:59.940127] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:30:29.183 [2024-12-06 18:28:59.940213] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:30:29.183 [2024-12-06 18:28:59.940276] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:30:29.183 [2024-12-06 18:28:59.940486] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:30:29.183 [2024-12-06 18:28:59.940503] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:30:29.183 [2024-12-06 18:28:59.940747] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:30:29.183 [2024-12-06 18:28:59.940902] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:30:29.183 [2024-12-06 18:28:59.940914] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:30:29.183 [2024-12-06 18:28:59.941066] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:30:29.183 18:28:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:29.183 18:28:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state 
raid_bdev1 online concat 64 4 00:30:29.183 18:28:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:30:29.183 18:28:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:30:29.183 18:28:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:30:29.183 18:28:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:30:29.183 18:28:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:30:29.183 18:28:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:30:29.183 18:28:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:30:29.183 18:28:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:30:29.183 18:28:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:30:29.183 18:28:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:29.183 18:28:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:29.183 18:28:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:29.183 18:28:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:30:29.183 18:28:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:29.183 18:28:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:30:29.184 "name": "raid_bdev1", 00:30:29.184 "uuid": "12644777-6b4a-43ef-82b1-7fa8f835de5c", 00:30:29.184 "strip_size_kb": 64, 00:30:29.184 "state": "online", 00:30:29.184 "raid_level": "concat", 00:30:29.184 "superblock": true, 00:30:29.184 "num_base_bdevs": 4, 00:30:29.184 "num_base_bdevs_discovered": 4, 00:30:29.184 
"num_base_bdevs_operational": 4, 00:30:29.184 "base_bdevs_list": [ 00:30:29.184 { 00:30:29.184 "name": "BaseBdev1", 00:30:29.184 "uuid": "62f81d20-746a-51f8-a612-1dce3b26fdbb", 00:30:29.184 "is_configured": true, 00:30:29.184 "data_offset": 2048, 00:30:29.184 "data_size": 63488 00:30:29.184 }, 00:30:29.184 { 00:30:29.184 "name": "BaseBdev2", 00:30:29.184 "uuid": "85c17bb0-15a6-588a-95e5-d292216c7a0a", 00:30:29.184 "is_configured": true, 00:30:29.184 "data_offset": 2048, 00:30:29.184 "data_size": 63488 00:30:29.184 }, 00:30:29.184 { 00:30:29.184 "name": "BaseBdev3", 00:30:29.184 "uuid": "b36f61a7-2d29-58f8-a4d5-c996c755cadb", 00:30:29.184 "is_configured": true, 00:30:29.184 "data_offset": 2048, 00:30:29.184 "data_size": 63488 00:30:29.184 }, 00:30:29.184 { 00:30:29.184 "name": "BaseBdev4", 00:30:29.184 "uuid": "f87a04bc-f680-58e6-8fc2-f772b2f4f2fc", 00:30:29.184 "is_configured": true, 00:30:29.184 "data_offset": 2048, 00:30:29.184 "data_size": 63488 00:30:29.184 } 00:30:29.184 ] 00:30:29.184 }' 00:30:29.184 18:28:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:30:29.184 18:28:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:30:29.763 18:29:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:30:29.764 18:29:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:30:29.764 [2024-12-06 18:29:00.487059] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:30:30.701 18:29:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:30:30.701 18:29:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:30.701 18:29:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:30:30.701 18:29:01 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:30.701 18:29:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:30:30.701 18:29:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:30:30.701 18:29:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:30:30.701 18:29:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:30:30.701 18:29:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:30:30.701 18:29:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:30:30.701 18:29:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:30:30.701 18:29:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:30:30.701 18:29:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:30:30.701 18:29:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:30:30.701 18:29:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:30:30.701 18:29:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:30:30.701 18:29:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:30:30.701 18:29:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:30.701 18:29:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:30.701 18:29:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:30.701 18:29:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:30:30.701 18:29:01 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:30.701 18:29:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:30:30.701 "name": "raid_bdev1", 00:30:30.701 "uuid": "12644777-6b4a-43ef-82b1-7fa8f835de5c", 00:30:30.701 "strip_size_kb": 64, 00:30:30.701 "state": "online", 00:30:30.701 "raid_level": "concat", 00:30:30.701 "superblock": true, 00:30:30.701 "num_base_bdevs": 4, 00:30:30.701 "num_base_bdevs_discovered": 4, 00:30:30.701 "num_base_bdevs_operational": 4, 00:30:30.701 "base_bdevs_list": [ 00:30:30.701 { 00:30:30.701 "name": "BaseBdev1", 00:30:30.701 "uuid": "62f81d20-746a-51f8-a612-1dce3b26fdbb", 00:30:30.701 "is_configured": true, 00:30:30.701 "data_offset": 2048, 00:30:30.701 "data_size": 63488 00:30:30.701 }, 00:30:30.701 { 00:30:30.701 "name": "BaseBdev2", 00:30:30.701 "uuid": "85c17bb0-15a6-588a-95e5-d292216c7a0a", 00:30:30.701 "is_configured": true, 00:30:30.701 "data_offset": 2048, 00:30:30.702 "data_size": 63488 00:30:30.702 }, 00:30:30.702 { 00:30:30.702 "name": "BaseBdev3", 00:30:30.702 "uuid": "b36f61a7-2d29-58f8-a4d5-c996c755cadb", 00:30:30.702 "is_configured": true, 00:30:30.702 "data_offset": 2048, 00:30:30.702 "data_size": 63488 00:30:30.702 }, 00:30:30.702 { 00:30:30.702 "name": "BaseBdev4", 00:30:30.702 "uuid": "f87a04bc-f680-58e6-8fc2-f772b2f4f2fc", 00:30:30.702 "is_configured": true, 00:30:30.702 "data_offset": 2048, 00:30:30.702 "data_size": 63488 00:30:30.702 } 00:30:30.702 ] 00:30:30.702 }' 00:30:30.702 18:29:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:30:30.702 18:29:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:30:30.961 18:29:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:30:30.961 18:29:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:30.961 18:29:01 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:30:30.961 [2024-12-06 18:29:01.900661] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:30:30.961 [2024-12-06 18:29:01.900697] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:30:30.961 { 00:30:30.961 "results": [ 00:30:30.961 { 00:30:30.961 "job": "raid_bdev1", 00:30:30.961 "core_mask": "0x1", 00:30:30.961 "workload": "randrw", 00:30:30.961 "percentage": 50, 00:30:30.961 "status": "finished", 00:30:30.961 "queue_depth": 1, 00:30:30.961 "io_size": 131072, 00:30:30.961 "runtime": 1.413431, 00:30:30.961 "iops": 14606.301970170458, 00:30:30.961 "mibps": 1825.7877462713072, 00:30:30.961 "io_failed": 1, 00:30:30.961 "io_timeout": 0, 00:30:30.961 "avg_latency_us": 94.23209326699416, 00:30:30.961 "min_latency_us": 27.553413654618474, 00:30:30.961 "max_latency_us": 1552.8610441767069 00:30:30.961 } 00:30:30.961 ], 00:30:30.961 "core_count": 1 00:30:30.961 } 00:30:30.961 [2024-12-06 18:29:01.903807] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:30:30.961 [2024-12-06 18:29:01.903869] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:30:30.961 [2024-12-06 18:29:01.903912] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:30:30.961 [2024-12-06 18:29:01.903929] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:30:31.221 18:29:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:31.221 18:29:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 72758 00:30:31.221 18:29:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 72758 ']' 00:30:31.221 18:29:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 72758 00:30:31.221 18:29:01 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@959 -- # uname 00:30:31.221 18:29:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:31.221 18:29:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72758 00:30:31.221 18:29:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:30:31.221 killing process with pid 72758 00:30:31.221 18:29:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:30:31.221 18:29:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72758' 00:30:31.221 18:29:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 72758 00:30:31.221 [2024-12-06 18:29:01.959624] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:30:31.221 18:29:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 72758 00:30:31.480 [2024-12-06 18:29:02.301185] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:30:32.860 18:29:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.34QqSpoHfd 00:30:32.860 18:29:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:30:32.860 18:29:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:30:32.860 ************************************ 00:30:32.860 END TEST raid_write_error_test 00:30:32.860 ************************************ 00:30:32.860 18:29:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.71 00:30:32.860 18:29:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:30:32.860 18:29:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:30:32.860 18:29:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:30:32.860 18:29:03 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.71 != \0\.\0\0 ]] 00:30:32.860 00:30:32.860 real 0m4.915s 00:30:32.860 user 0m5.829s 00:30:32.860 sys 0m0.664s 00:30:32.860 18:29:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:32.860 18:29:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:30:32.860 18:29:03 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:30:32.860 18:29:03 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid1 4 false 00:30:32.860 18:29:03 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:30:32.860 18:29:03 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:32.860 18:29:03 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:30:32.860 ************************************ 00:30:32.860 START TEST raid_state_function_test 00:30:32.860 ************************************ 00:30:32.860 18:29:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 4 false 00:30:32.860 18:29:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:30:32.860 18:29:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:30:32.860 18:29:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:30:32.860 18:29:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:30:32.860 18:29:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:30:32.861 18:29:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:30:32.861 18:29:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:30:32.861 18:29:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 
00:30:32.861 18:29:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:30:32.861 18:29:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:30:32.861 18:29:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:30:32.861 18:29:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:30:32.861 18:29:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:30:32.861 18:29:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:30:32.861 18:29:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:30:32.861 18:29:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:30:32.861 18:29:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:30:32.861 18:29:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:30:32.861 18:29:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:30:32.861 18:29:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:30:32.861 18:29:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:30:32.861 18:29:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:30:32.861 18:29:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:30:32.861 18:29:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:30:32.861 18:29:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:30:32.861 18:29:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:30:32.861 18:29:03 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:30:32.861 18:29:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:30:32.861 18:29:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=72902 00:30:32.861 18:29:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:30:32.861 18:29:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 72902' 00:30:32.861 Process raid pid: 72902 00:30:32.861 18:29:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 72902 00:30:32.861 18:29:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 72902 ']' 00:30:32.861 18:29:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:32.861 18:29:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:32.861 18:29:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:32.861 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:32.861 18:29:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:32.861 18:29:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:32.861 [2024-12-06 18:29:03.729938] Starting SPDK v25.01-pre git sha1 b6a18b192 / DPDK 24.03.0 initialization... 
00:30:32.861 [2024-12-06 18:29:03.730265] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:33.119 [2024-12-06 18:29:03.911529] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:33.119 [2024-12-06 18:29:04.026162] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:33.377 [2024-12-06 18:29:04.244704] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:30:33.377 [2024-12-06 18:29:04.244745] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:30:33.945 18:29:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:33.945 18:29:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:30:33.945 18:29:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:30:33.945 18:29:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:33.945 18:29:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:33.945 [2024-12-06 18:29:04.684321] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:30:33.945 [2024-12-06 18:29:04.684514] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:30:33.945 [2024-12-06 18:29:04.684631] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:30:33.945 [2024-12-06 18:29:04.684679] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:30:33.945 [2024-12-06 18:29:04.684708] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:30:33.945 [2024-12-06 18:29:04.684723] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:30:33.945 [2024-12-06 18:29:04.684738] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:30:33.945 [2024-12-06 18:29:04.684750] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:30:33.945 18:29:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:33.945 18:29:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:30:33.945 18:29:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:30:33.945 18:29:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:30:33.945 18:29:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:30:33.945 18:29:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:30:33.945 18:29:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:30:33.945 18:29:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:30:33.945 18:29:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:30:33.945 18:29:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:30:33.945 18:29:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:30:33.945 18:29:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:30:33.945 18:29:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:33.945 18:29:04 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:30:33.945 18:29:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:33.945 18:29:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:33.945 18:29:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:30:33.945 "name": "Existed_Raid", 00:30:33.945 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:33.945 "strip_size_kb": 0, 00:30:33.945 "state": "configuring", 00:30:33.945 "raid_level": "raid1", 00:30:33.945 "superblock": false, 00:30:33.945 "num_base_bdevs": 4, 00:30:33.945 "num_base_bdevs_discovered": 0, 00:30:33.945 "num_base_bdevs_operational": 4, 00:30:33.945 "base_bdevs_list": [ 00:30:33.945 { 00:30:33.945 "name": "BaseBdev1", 00:30:33.945 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:33.945 "is_configured": false, 00:30:33.945 "data_offset": 0, 00:30:33.945 "data_size": 0 00:30:33.945 }, 00:30:33.945 { 00:30:33.945 "name": "BaseBdev2", 00:30:33.945 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:33.945 "is_configured": false, 00:30:33.945 "data_offset": 0, 00:30:33.945 "data_size": 0 00:30:33.945 }, 00:30:33.945 { 00:30:33.945 "name": "BaseBdev3", 00:30:33.945 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:33.945 "is_configured": false, 00:30:33.945 "data_offset": 0, 00:30:33.945 "data_size": 0 00:30:33.945 }, 00:30:33.945 { 00:30:33.945 "name": "BaseBdev4", 00:30:33.945 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:33.945 "is_configured": false, 00:30:33.945 "data_offset": 0, 00:30:33.945 "data_size": 0 00:30:33.945 } 00:30:33.945 ] 00:30:33.945 }' 00:30:33.945 18:29:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:30:33.945 18:29:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:34.204 18:29:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete 
Existed_Raid 00:30:34.204 18:29:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:34.204 18:29:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:34.204 [2024-12-06 18:29:05.120190] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:30:34.204 [2024-12-06 18:29:05.120230] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:30:34.204 18:29:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:34.204 18:29:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:30:34.204 18:29:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:34.204 18:29:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:34.204 [2024-12-06 18:29:05.132160] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:30:34.204 [2024-12-06 18:29:05.132322] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:30:34.204 [2024-12-06 18:29:05.132402] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:30:34.204 [2024-12-06 18:29:05.132446] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:30:34.204 [2024-12-06 18:29:05.132474] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:30:34.204 [2024-12-06 18:29:05.132506] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:30:34.204 [2024-12-06 18:29:05.132533] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:30:34.204 [2024-12-06 18:29:05.132564] bdev_raid_rpc.c: 
311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:30:34.204 18:29:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:34.204 18:29:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:30:34.204 18:29:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:34.204 18:29:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:34.462 [2024-12-06 18:29:05.178598] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:30:34.462 BaseBdev1 00:30:34.462 18:29:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:34.462 18:29:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:30:34.462 18:29:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:30:34.462 18:29:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:30:34.462 18:29:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:30:34.462 18:29:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:30:34.462 18:29:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:30:34.462 18:29:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:30:34.462 18:29:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:34.462 18:29:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:34.462 18:29:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:34.462 18:29:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev1 -t 2000 00:30:34.462 18:29:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:34.462 18:29:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:34.462 [ 00:30:34.462 { 00:30:34.462 "name": "BaseBdev1", 00:30:34.462 "aliases": [ 00:30:34.462 "3b645ee8-2db8-4d6c-a603-e0c097d00fa6" 00:30:34.462 ], 00:30:34.462 "product_name": "Malloc disk", 00:30:34.462 "block_size": 512, 00:30:34.462 "num_blocks": 65536, 00:30:34.462 "uuid": "3b645ee8-2db8-4d6c-a603-e0c097d00fa6", 00:30:34.462 "assigned_rate_limits": { 00:30:34.462 "rw_ios_per_sec": 0, 00:30:34.462 "rw_mbytes_per_sec": 0, 00:30:34.462 "r_mbytes_per_sec": 0, 00:30:34.462 "w_mbytes_per_sec": 0 00:30:34.462 }, 00:30:34.462 "claimed": true, 00:30:34.462 "claim_type": "exclusive_write", 00:30:34.462 "zoned": false, 00:30:34.462 "supported_io_types": { 00:30:34.462 "read": true, 00:30:34.462 "write": true, 00:30:34.462 "unmap": true, 00:30:34.462 "flush": true, 00:30:34.462 "reset": true, 00:30:34.462 "nvme_admin": false, 00:30:34.462 "nvme_io": false, 00:30:34.462 "nvme_io_md": false, 00:30:34.462 "write_zeroes": true, 00:30:34.462 "zcopy": true, 00:30:34.462 "get_zone_info": false, 00:30:34.462 "zone_management": false, 00:30:34.462 "zone_append": false, 00:30:34.462 "compare": false, 00:30:34.462 "compare_and_write": false, 00:30:34.462 "abort": true, 00:30:34.462 "seek_hole": false, 00:30:34.462 "seek_data": false, 00:30:34.462 "copy": true, 00:30:34.462 "nvme_iov_md": false 00:30:34.462 }, 00:30:34.462 "memory_domains": [ 00:30:34.462 { 00:30:34.462 "dma_device_id": "system", 00:30:34.462 "dma_device_type": 1 00:30:34.462 }, 00:30:34.462 { 00:30:34.462 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:34.462 "dma_device_type": 2 00:30:34.462 } 00:30:34.462 ], 00:30:34.462 "driver_specific": {} 00:30:34.462 } 00:30:34.462 ] 00:30:34.462 18:29:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:30:34.462 18:29:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:30:34.462 18:29:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:30:34.462 18:29:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:30:34.462 18:29:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:30:34.462 18:29:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:30:34.462 18:29:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:30:34.462 18:29:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:30:34.462 18:29:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:30:34.462 18:29:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:30:34.462 18:29:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:30:34.462 18:29:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:30:34.462 18:29:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:34.462 18:29:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:30:34.462 18:29:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:34.462 18:29:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:34.462 18:29:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:34.462 18:29:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:30:34.462 "name": "Existed_Raid", 
00:30:34.462 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:34.462 "strip_size_kb": 0, 00:30:34.462 "state": "configuring", 00:30:34.463 "raid_level": "raid1", 00:30:34.463 "superblock": false, 00:30:34.463 "num_base_bdevs": 4, 00:30:34.463 "num_base_bdevs_discovered": 1, 00:30:34.463 "num_base_bdevs_operational": 4, 00:30:34.463 "base_bdevs_list": [ 00:30:34.463 { 00:30:34.463 "name": "BaseBdev1", 00:30:34.463 "uuid": "3b645ee8-2db8-4d6c-a603-e0c097d00fa6", 00:30:34.463 "is_configured": true, 00:30:34.463 "data_offset": 0, 00:30:34.463 "data_size": 65536 00:30:34.463 }, 00:30:34.463 { 00:30:34.463 "name": "BaseBdev2", 00:30:34.463 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:34.463 "is_configured": false, 00:30:34.463 "data_offset": 0, 00:30:34.463 "data_size": 0 00:30:34.463 }, 00:30:34.463 { 00:30:34.463 "name": "BaseBdev3", 00:30:34.463 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:34.463 "is_configured": false, 00:30:34.463 "data_offset": 0, 00:30:34.463 "data_size": 0 00:30:34.463 }, 00:30:34.463 { 00:30:34.463 "name": "BaseBdev4", 00:30:34.463 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:34.463 "is_configured": false, 00:30:34.463 "data_offset": 0, 00:30:34.463 "data_size": 0 00:30:34.463 } 00:30:34.463 ] 00:30:34.463 }' 00:30:34.463 18:29:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:30:34.463 18:29:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:34.720 18:29:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:30:34.720 18:29:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:34.720 18:29:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:34.978 [2024-12-06 18:29:05.670114] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:30:34.978 [2024-12-06 18:29:05.670303] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:30:34.978 18:29:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:34.978 18:29:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:30:34.978 18:29:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:34.978 18:29:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:34.978 [2024-12-06 18:29:05.682136] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:30:34.978 [2024-12-06 18:29:05.684323] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:30:34.978 [2024-12-06 18:29:05.684472] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:30:34.979 [2024-12-06 18:29:05.684554] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:30:34.979 [2024-12-06 18:29:05.684601] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:30:34.979 [2024-12-06 18:29:05.684629] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:30:34.979 [2024-12-06 18:29:05.684661] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:30:34.979 18:29:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:34.979 18:29:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:30:34.979 18:29:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:30:34.979 18:29:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:30:34.979 
18:29:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:30:34.979 18:29:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:30:34.979 18:29:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:30:34.979 18:29:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:30:34.979 18:29:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:30:34.979 18:29:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:30:34.979 18:29:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:30:34.979 18:29:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:30:34.979 18:29:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:30:34.979 18:29:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:34.979 18:29:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:34.979 18:29:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:34.979 18:29:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:30:34.979 18:29:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:34.979 18:29:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:30:34.979 "name": "Existed_Raid", 00:30:34.979 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:34.979 "strip_size_kb": 0, 00:30:34.979 "state": "configuring", 00:30:34.979 "raid_level": "raid1", 00:30:34.979 "superblock": false, 00:30:34.979 "num_base_bdevs": 4, 00:30:34.979 "num_base_bdevs_discovered": 1, 
00:30:34.979 "num_base_bdevs_operational": 4, 00:30:34.979 "base_bdevs_list": [ 00:30:34.979 { 00:30:34.979 "name": "BaseBdev1", 00:30:34.979 "uuid": "3b645ee8-2db8-4d6c-a603-e0c097d00fa6", 00:30:34.979 "is_configured": true, 00:30:34.979 "data_offset": 0, 00:30:34.979 "data_size": 65536 00:30:34.979 }, 00:30:34.979 { 00:30:34.979 "name": "BaseBdev2", 00:30:34.979 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:34.979 "is_configured": false, 00:30:34.979 "data_offset": 0, 00:30:34.979 "data_size": 0 00:30:34.979 }, 00:30:34.979 { 00:30:34.979 "name": "BaseBdev3", 00:30:34.979 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:34.979 "is_configured": false, 00:30:34.979 "data_offset": 0, 00:30:34.979 "data_size": 0 00:30:34.979 }, 00:30:34.979 { 00:30:34.979 "name": "BaseBdev4", 00:30:34.979 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:34.979 "is_configured": false, 00:30:34.979 "data_offset": 0, 00:30:34.979 "data_size": 0 00:30:34.979 } 00:30:34.979 ] 00:30:34.979 }' 00:30:34.979 18:29:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:30:34.979 18:29:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:35.238 18:29:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:30:35.238 18:29:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:35.238 18:29:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:35.238 [2024-12-06 18:29:06.131479] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:30:35.238 BaseBdev2 00:30:35.238 18:29:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:35.238 18:29:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:30:35.238 18:29:06 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:30:35.238 18:29:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:30:35.238 18:29:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:30:35.238 18:29:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:30:35.238 18:29:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:30:35.238 18:29:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:30:35.238 18:29:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:35.238 18:29:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:35.238 18:29:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:35.238 18:29:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:30:35.238 18:29:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:35.238 18:29:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:35.238 [ 00:30:35.238 { 00:30:35.238 "name": "BaseBdev2", 00:30:35.238 "aliases": [ 00:30:35.238 "667eb943-1d2d-427b-85a8-c5e2a4563596" 00:30:35.238 ], 00:30:35.238 "product_name": "Malloc disk", 00:30:35.238 "block_size": 512, 00:30:35.238 "num_blocks": 65536, 00:30:35.238 "uuid": "667eb943-1d2d-427b-85a8-c5e2a4563596", 00:30:35.238 "assigned_rate_limits": { 00:30:35.238 "rw_ios_per_sec": 0, 00:30:35.238 "rw_mbytes_per_sec": 0, 00:30:35.238 "r_mbytes_per_sec": 0, 00:30:35.238 "w_mbytes_per_sec": 0 00:30:35.238 }, 00:30:35.238 "claimed": true, 00:30:35.238 "claim_type": "exclusive_write", 00:30:35.238 "zoned": false, 00:30:35.238 "supported_io_types": { 00:30:35.238 "read": true, 
00:30:35.238 "write": true, 00:30:35.238 "unmap": true, 00:30:35.238 "flush": true, 00:30:35.238 "reset": true, 00:30:35.238 "nvme_admin": false, 00:30:35.238 "nvme_io": false, 00:30:35.238 "nvme_io_md": false, 00:30:35.238 "write_zeroes": true, 00:30:35.238 "zcopy": true, 00:30:35.238 "get_zone_info": false, 00:30:35.238 "zone_management": false, 00:30:35.238 "zone_append": false, 00:30:35.238 "compare": false, 00:30:35.238 "compare_and_write": false, 00:30:35.238 "abort": true, 00:30:35.238 "seek_hole": false, 00:30:35.238 "seek_data": false, 00:30:35.238 "copy": true, 00:30:35.238 "nvme_iov_md": false 00:30:35.238 }, 00:30:35.238 "memory_domains": [ 00:30:35.238 { 00:30:35.238 "dma_device_id": "system", 00:30:35.238 "dma_device_type": 1 00:30:35.238 }, 00:30:35.238 { 00:30:35.238 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:35.238 "dma_device_type": 2 00:30:35.238 } 00:30:35.238 ], 00:30:35.238 "driver_specific": {} 00:30:35.238 } 00:30:35.238 ] 00:30:35.238 18:29:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:35.238 18:29:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:30:35.238 18:29:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:30:35.238 18:29:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:30:35.238 18:29:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:30:35.238 18:29:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:30:35.238 18:29:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:30:35.238 18:29:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:30:35.238 18:29:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
00:30:35.238 18:29:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:30:35.238 18:29:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:30:35.238 18:29:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:30:35.238 18:29:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:30:35.238 18:29:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:30:35.500 18:29:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:30:35.500 18:29:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:35.500 18:29:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:35.500 18:29:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:35.500 18:29:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:35.500 18:29:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:30:35.500 "name": "Existed_Raid", 00:30:35.500 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:35.500 "strip_size_kb": 0, 00:30:35.500 "state": "configuring", 00:30:35.500 "raid_level": "raid1", 00:30:35.500 "superblock": false, 00:30:35.500 "num_base_bdevs": 4, 00:30:35.500 "num_base_bdevs_discovered": 2, 00:30:35.500 "num_base_bdevs_operational": 4, 00:30:35.500 "base_bdevs_list": [ 00:30:35.500 { 00:30:35.500 "name": "BaseBdev1", 00:30:35.500 "uuid": "3b645ee8-2db8-4d6c-a603-e0c097d00fa6", 00:30:35.500 "is_configured": true, 00:30:35.500 "data_offset": 0, 00:30:35.500 "data_size": 65536 00:30:35.500 }, 00:30:35.500 { 00:30:35.500 "name": "BaseBdev2", 00:30:35.500 "uuid": "667eb943-1d2d-427b-85a8-c5e2a4563596", 00:30:35.500 "is_configured": true, 
00:30:35.500 "data_offset": 0, 00:30:35.500 "data_size": 65536 00:30:35.500 }, 00:30:35.500 { 00:30:35.500 "name": "BaseBdev3", 00:30:35.500 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:35.500 "is_configured": false, 00:30:35.500 "data_offset": 0, 00:30:35.500 "data_size": 0 00:30:35.500 }, 00:30:35.500 { 00:30:35.500 "name": "BaseBdev4", 00:30:35.500 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:35.500 "is_configured": false, 00:30:35.500 "data_offset": 0, 00:30:35.500 "data_size": 0 00:30:35.500 } 00:30:35.500 ] 00:30:35.500 }' 00:30:35.500 18:29:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:30:35.500 18:29:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:35.760 18:29:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:30:35.760 18:29:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:35.760 18:29:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:35.760 [2024-12-06 18:29:06.618335] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:30:35.760 BaseBdev3 00:30:35.760 18:29:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:35.760 18:29:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:30:35.760 18:29:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:30:35.760 18:29:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:30:35.760 18:29:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:30:35.760 18:29:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:30:35.760 18:29:06 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # bdev_timeout=2000 00:30:35.760 18:29:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:30:35.760 18:29:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:35.760 18:29:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:35.760 18:29:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:35.760 18:29:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:30:35.760 18:29:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:35.760 18:29:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:35.760 [ 00:30:35.760 { 00:30:35.760 "name": "BaseBdev3", 00:30:35.760 "aliases": [ 00:30:35.760 "175c2364-b2ca-4145-b4bb-c8de26f1111b" 00:30:35.760 ], 00:30:35.760 "product_name": "Malloc disk", 00:30:35.760 "block_size": 512, 00:30:35.760 "num_blocks": 65536, 00:30:35.760 "uuid": "175c2364-b2ca-4145-b4bb-c8de26f1111b", 00:30:35.760 "assigned_rate_limits": { 00:30:35.760 "rw_ios_per_sec": 0, 00:30:35.760 "rw_mbytes_per_sec": 0, 00:30:35.760 "r_mbytes_per_sec": 0, 00:30:35.760 "w_mbytes_per_sec": 0 00:30:35.760 }, 00:30:35.760 "claimed": true, 00:30:35.760 "claim_type": "exclusive_write", 00:30:35.760 "zoned": false, 00:30:35.760 "supported_io_types": { 00:30:35.760 "read": true, 00:30:35.760 "write": true, 00:30:35.760 "unmap": true, 00:30:35.760 "flush": true, 00:30:35.760 "reset": true, 00:30:35.760 "nvme_admin": false, 00:30:35.760 "nvme_io": false, 00:30:35.760 "nvme_io_md": false, 00:30:35.760 "write_zeroes": true, 00:30:35.760 "zcopy": true, 00:30:35.760 "get_zone_info": false, 00:30:35.760 "zone_management": false, 00:30:35.760 "zone_append": false, 00:30:35.760 "compare": false, 00:30:35.760 "compare_and_write": false, 
00:30:35.760 "abort": true, 00:30:35.760 "seek_hole": false, 00:30:35.760 "seek_data": false, 00:30:35.760 "copy": true, 00:30:35.760 "nvme_iov_md": false 00:30:35.760 }, 00:30:35.760 "memory_domains": [ 00:30:35.760 { 00:30:35.760 "dma_device_id": "system", 00:30:35.760 "dma_device_type": 1 00:30:35.760 }, 00:30:35.760 { 00:30:35.760 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:35.760 "dma_device_type": 2 00:30:35.760 } 00:30:35.760 ], 00:30:35.760 "driver_specific": {} 00:30:35.760 } 00:30:35.760 ] 00:30:35.760 18:29:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:35.760 18:29:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:30:35.760 18:29:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:30:35.760 18:29:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:30:35.760 18:29:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:30:35.760 18:29:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:30:35.760 18:29:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:30:35.760 18:29:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:30:35.760 18:29:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:30:35.760 18:29:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:30:35.760 18:29:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:30:35.760 18:29:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:30:35.760 18:29:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:30:35.760 18:29:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:30:35.760 18:29:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:35.760 18:29:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:35.760 18:29:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:35.760 18:29:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:30:35.760 18:29:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:36.020 18:29:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:30:36.020 "name": "Existed_Raid", 00:30:36.020 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:36.020 "strip_size_kb": 0, 00:30:36.020 "state": "configuring", 00:30:36.020 "raid_level": "raid1", 00:30:36.020 "superblock": false, 00:30:36.020 "num_base_bdevs": 4, 00:30:36.020 "num_base_bdevs_discovered": 3, 00:30:36.020 "num_base_bdevs_operational": 4, 00:30:36.020 "base_bdevs_list": [ 00:30:36.020 { 00:30:36.020 "name": "BaseBdev1", 00:30:36.020 "uuid": "3b645ee8-2db8-4d6c-a603-e0c097d00fa6", 00:30:36.020 "is_configured": true, 00:30:36.020 "data_offset": 0, 00:30:36.020 "data_size": 65536 00:30:36.020 }, 00:30:36.020 { 00:30:36.020 "name": "BaseBdev2", 00:30:36.020 "uuid": "667eb943-1d2d-427b-85a8-c5e2a4563596", 00:30:36.020 "is_configured": true, 00:30:36.020 "data_offset": 0, 00:30:36.020 "data_size": 65536 00:30:36.020 }, 00:30:36.020 { 00:30:36.020 "name": "BaseBdev3", 00:30:36.020 "uuid": "175c2364-b2ca-4145-b4bb-c8de26f1111b", 00:30:36.020 "is_configured": true, 00:30:36.020 "data_offset": 0, 00:30:36.020 "data_size": 65536 00:30:36.020 }, 00:30:36.020 { 00:30:36.020 "name": "BaseBdev4", 00:30:36.020 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:36.020 "is_configured": false, 
00:30:36.020 "data_offset": 0, 00:30:36.020 "data_size": 0 00:30:36.020 } 00:30:36.020 ] 00:30:36.020 }' 00:30:36.020 18:29:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:30:36.020 18:29:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:36.291 18:29:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:30:36.291 18:29:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:36.291 18:29:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:36.291 [2024-12-06 18:29:07.125236] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:30:36.291 [2024-12-06 18:29:07.125298] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:30:36.291 [2024-12-06 18:29:07.125308] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:30:36.291 [2024-12-06 18:29:07.125582] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:30:36.291 [2024-12-06 18:29:07.125778] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:30:36.291 [2024-12-06 18:29:07.125801] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:30:36.291 [2024-12-06 18:29:07.126058] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:30:36.291 BaseBdev4 00:30:36.291 18:29:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:36.291 18:29:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:30:36.291 18:29:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:30:36.291 18:29:07 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@904 -- # local bdev_timeout= 00:30:36.291 18:29:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:30:36.291 18:29:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:30:36.291 18:29:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:30:36.291 18:29:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:30:36.291 18:29:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:36.291 18:29:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:36.291 18:29:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:36.291 18:29:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:30:36.291 18:29:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:36.291 18:29:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:36.291 [ 00:30:36.291 { 00:30:36.291 "name": "BaseBdev4", 00:30:36.291 "aliases": [ 00:30:36.291 "a8d51171-4f07-42f6-86cd-8f8037865f84" 00:30:36.291 ], 00:30:36.291 "product_name": "Malloc disk", 00:30:36.291 "block_size": 512, 00:30:36.291 "num_blocks": 65536, 00:30:36.291 "uuid": "a8d51171-4f07-42f6-86cd-8f8037865f84", 00:30:36.291 "assigned_rate_limits": { 00:30:36.291 "rw_ios_per_sec": 0, 00:30:36.291 "rw_mbytes_per_sec": 0, 00:30:36.291 "r_mbytes_per_sec": 0, 00:30:36.291 "w_mbytes_per_sec": 0 00:30:36.291 }, 00:30:36.291 "claimed": true, 00:30:36.291 "claim_type": "exclusive_write", 00:30:36.291 "zoned": false, 00:30:36.291 "supported_io_types": { 00:30:36.291 "read": true, 00:30:36.291 "write": true, 00:30:36.291 "unmap": true, 00:30:36.291 "flush": true, 00:30:36.291 "reset": true, 00:30:36.291 
"nvme_admin": false, 00:30:36.291 "nvme_io": false, 00:30:36.291 "nvme_io_md": false, 00:30:36.291 "write_zeroes": true, 00:30:36.291 "zcopy": true, 00:30:36.291 "get_zone_info": false, 00:30:36.291 "zone_management": false, 00:30:36.291 "zone_append": false, 00:30:36.291 "compare": false, 00:30:36.291 "compare_and_write": false, 00:30:36.291 "abort": true, 00:30:36.291 "seek_hole": false, 00:30:36.291 "seek_data": false, 00:30:36.291 "copy": true, 00:30:36.291 "nvme_iov_md": false 00:30:36.291 }, 00:30:36.291 "memory_domains": [ 00:30:36.291 { 00:30:36.291 "dma_device_id": "system", 00:30:36.291 "dma_device_type": 1 00:30:36.291 }, 00:30:36.291 { 00:30:36.291 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:36.291 "dma_device_type": 2 00:30:36.291 } 00:30:36.291 ], 00:30:36.291 "driver_specific": {} 00:30:36.291 } 00:30:36.291 ] 00:30:36.291 18:29:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:36.291 18:29:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:30:36.291 18:29:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:30:36.291 18:29:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:30:36.291 18:29:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:30:36.292 18:29:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:30:36.292 18:29:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:30:36.292 18:29:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:30:36.292 18:29:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:30:36.292 18:29:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:30:36.292 18:29:07 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:30:36.292 18:29:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:30:36.292 18:29:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:30:36.292 18:29:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:30:36.292 18:29:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:36.292 18:29:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:30:36.292 18:29:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:36.292 18:29:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:36.292 18:29:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:36.292 18:29:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:30:36.292 "name": "Existed_Raid", 00:30:36.292 "uuid": "c2b480d4-9d65-4b12-9226-cf119af37873", 00:30:36.292 "strip_size_kb": 0, 00:30:36.292 "state": "online", 00:30:36.292 "raid_level": "raid1", 00:30:36.292 "superblock": false, 00:30:36.292 "num_base_bdevs": 4, 00:30:36.292 "num_base_bdevs_discovered": 4, 00:30:36.292 "num_base_bdevs_operational": 4, 00:30:36.292 "base_bdevs_list": [ 00:30:36.292 { 00:30:36.292 "name": "BaseBdev1", 00:30:36.292 "uuid": "3b645ee8-2db8-4d6c-a603-e0c097d00fa6", 00:30:36.292 "is_configured": true, 00:30:36.292 "data_offset": 0, 00:30:36.292 "data_size": 65536 00:30:36.292 }, 00:30:36.292 { 00:30:36.292 "name": "BaseBdev2", 00:30:36.292 "uuid": "667eb943-1d2d-427b-85a8-c5e2a4563596", 00:30:36.292 "is_configured": true, 00:30:36.292 "data_offset": 0, 00:30:36.292 "data_size": 65536 00:30:36.292 }, 00:30:36.292 { 00:30:36.292 "name": "BaseBdev3", 00:30:36.292 "uuid": 
"175c2364-b2ca-4145-b4bb-c8de26f1111b", 00:30:36.292 "is_configured": true, 00:30:36.292 "data_offset": 0, 00:30:36.292 "data_size": 65536 00:30:36.292 }, 00:30:36.292 { 00:30:36.292 "name": "BaseBdev4", 00:30:36.292 "uuid": "a8d51171-4f07-42f6-86cd-8f8037865f84", 00:30:36.292 "is_configured": true, 00:30:36.292 "data_offset": 0, 00:30:36.292 "data_size": 65536 00:30:36.292 } 00:30:36.292 ] 00:30:36.292 }' 00:30:36.292 18:29:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:30:36.292 18:29:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:36.875 18:29:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:30:36.875 18:29:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:30:36.875 18:29:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:30:36.875 18:29:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:30:36.875 18:29:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:30:36.875 18:29:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:30:36.875 18:29:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:30:36.875 18:29:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:36.875 18:29:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:36.875 18:29:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:30:36.875 [2024-12-06 18:29:07.604913] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:30:36.875 18:29:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:36.875 18:29:07 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:30:36.875 "name": "Existed_Raid", 00:30:36.875 "aliases": [ 00:30:36.875 "c2b480d4-9d65-4b12-9226-cf119af37873" 00:30:36.875 ], 00:30:36.875 "product_name": "Raid Volume", 00:30:36.875 "block_size": 512, 00:30:36.875 "num_blocks": 65536, 00:30:36.876 "uuid": "c2b480d4-9d65-4b12-9226-cf119af37873", 00:30:36.876 "assigned_rate_limits": { 00:30:36.876 "rw_ios_per_sec": 0, 00:30:36.876 "rw_mbytes_per_sec": 0, 00:30:36.876 "r_mbytes_per_sec": 0, 00:30:36.876 "w_mbytes_per_sec": 0 00:30:36.876 }, 00:30:36.876 "claimed": false, 00:30:36.876 "zoned": false, 00:30:36.876 "supported_io_types": { 00:30:36.876 "read": true, 00:30:36.876 "write": true, 00:30:36.876 "unmap": false, 00:30:36.876 "flush": false, 00:30:36.876 "reset": true, 00:30:36.876 "nvme_admin": false, 00:30:36.876 "nvme_io": false, 00:30:36.876 "nvme_io_md": false, 00:30:36.876 "write_zeroes": true, 00:30:36.876 "zcopy": false, 00:30:36.876 "get_zone_info": false, 00:30:36.876 "zone_management": false, 00:30:36.876 "zone_append": false, 00:30:36.876 "compare": false, 00:30:36.876 "compare_and_write": false, 00:30:36.876 "abort": false, 00:30:36.876 "seek_hole": false, 00:30:36.876 "seek_data": false, 00:30:36.876 "copy": false, 00:30:36.876 "nvme_iov_md": false 00:30:36.876 }, 00:30:36.876 "memory_domains": [ 00:30:36.876 { 00:30:36.876 "dma_device_id": "system", 00:30:36.876 "dma_device_type": 1 00:30:36.876 }, 00:30:36.876 { 00:30:36.876 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:36.876 "dma_device_type": 2 00:30:36.876 }, 00:30:36.876 { 00:30:36.876 "dma_device_id": "system", 00:30:36.876 "dma_device_type": 1 00:30:36.876 }, 00:30:36.876 { 00:30:36.876 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:36.876 "dma_device_type": 2 00:30:36.876 }, 00:30:36.876 { 00:30:36.876 "dma_device_id": "system", 00:30:36.876 "dma_device_type": 1 00:30:36.876 }, 00:30:36.876 { 00:30:36.876 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:30:36.876 "dma_device_type": 2 00:30:36.876 }, 00:30:36.876 { 00:30:36.876 "dma_device_id": "system", 00:30:36.876 "dma_device_type": 1 00:30:36.876 }, 00:30:36.876 { 00:30:36.876 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:36.876 "dma_device_type": 2 00:30:36.876 } 00:30:36.876 ], 00:30:36.876 "driver_specific": { 00:30:36.876 "raid": { 00:30:36.876 "uuid": "c2b480d4-9d65-4b12-9226-cf119af37873", 00:30:36.876 "strip_size_kb": 0, 00:30:36.876 "state": "online", 00:30:36.876 "raid_level": "raid1", 00:30:36.876 "superblock": false, 00:30:36.876 "num_base_bdevs": 4, 00:30:36.876 "num_base_bdevs_discovered": 4, 00:30:36.876 "num_base_bdevs_operational": 4, 00:30:36.876 "base_bdevs_list": [ 00:30:36.876 { 00:30:36.876 "name": "BaseBdev1", 00:30:36.876 "uuid": "3b645ee8-2db8-4d6c-a603-e0c097d00fa6", 00:30:36.876 "is_configured": true, 00:30:36.876 "data_offset": 0, 00:30:36.876 "data_size": 65536 00:30:36.876 }, 00:30:36.876 { 00:30:36.876 "name": "BaseBdev2", 00:30:36.876 "uuid": "667eb943-1d2d-427b-85a8-c5e2a4563596", 00:30:36.876 "is_configured": true, 00:30:36.876 "data_offset": 0, 00:30:36.876 "data_size": 65536 00:30:36.876 }, 00:30:36.876 { 00:30:36.876 "name": "BaseBdev3", 00:30:36.876 "uuid": "175c2364-b2ca-4145-b4bb-c8de26f1111b", 00:30:36.876 "is_configured": true, 00:30:36.876 "data_offset": 0, 00:30:36.876 "data_size": 65536 00:30:36.876 }, 00:30:36.876 { 00:30:36.876 "name": "BaseBdev4", 00:30:36.876 "uuid": "a8d51171-4f07-42f6-86cd-8f8037865f84", 00:30:36.876 "is_configured": true, 00:30:36.876 "data_offset": 0, 00:30:36.876 "data_size": 65536 00:30:36.876 } 00:30:36.876 ] 00:30:36.876 } 00:30:36.876 } 00:30:36.876 }' 00:30:36.876 18:29:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:30:36.876 18:29:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:30:36.876 BaseBdev2 00:30:36.876 BaseBdev3 
00:30:36.876 BaseBdev4' 00:30:36.876 18:29:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:30:36.876 18:29:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:30:36.876 18:29:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:30:36.876 18:29:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:30:36.876 18:29:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:30:36.876 18:29:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:36.876 18:29:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:36.876 18:29:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:36.876 18:29:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:30:36.876 18:29:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:30:36.876 18:29:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:30:36.876 18:29:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:30:36.876 18:29:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:36.876 18:29:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:36.876 18:29:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:30:36.876 18:29:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:36.876 18:29:07 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:30:36.876 18:29:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:30:36.876 18:29:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:30:36.876 18:29:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:30:36.876 18:29:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:30:36.876 18:29:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:37.136 18:29:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:37.136 18:29:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:37.136 18:29:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:30:37.136 18:29:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:30:37.136 18:29:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:30:37.136 18:29:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:30:37.136 18:29:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:30:37.136 18:29:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:37.136 18:29:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:37.136 18:29:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:37.136 18:29:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:30:37.136 18:29:07 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:30:37.136 18:29:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:30:37.136 18:29:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:37.136 18:29:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:37.136 [2024-12-06 18:29:07.888303] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:30:37.136 18:29:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:37.136 18:29:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:30:37.136 18:29:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:30:37.136 18:29:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:30:37.136 18:29:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:30:37.136 18:29:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:30:37.136 18:29:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:30:37.136 18:29:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:30:37.136 18:29:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:30:37.136 18:29:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:30:37.136 18:29:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:30:37.136 18:29:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:30:37.136 18:29:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:30:37.136 
18:29:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:30:37.136 18:29:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:30:37.137 18:29:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:30:37.137 18:29:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:37.137 18:29:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:37.137 18:29:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:37.137 18:29:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:30:37.137 18:29:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:37.137 18:29:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:30:37.137 "name": "Existed_Raid", 00:30:37.137 "uuid": "c2b480d4-9d65-4b12-9226-cf119af37873", 00:30:37.137 "strip_size_kb": 0, 00:30:37.137 "state": "online", 00:30:37.137 "raid_level": "raid1", 00:30:37.137 "superblock": false, 00:30:37.137 "num_base_bdevs": 4, 00:30:37.137 "num_base_bdevs_discovered": 3, 00:30:37.137 "num_base_bdevs_operational": 3, 00:30:37.137 "base_bdevs_list": [ 00:30:37.137 { 00:30:37.137 "name": null, 00:30:37.137 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:37.137 "is_configured": false, 00:30:37.137 "data_offset": 0, 00:30:37.137 "data_size": 65536 00:30:37.137 }, 00:30:37.137 { 00:30:37.137 "name": "BaseBdev2", 00:30:37.137 "uuid": "667eb943-1d2d-427b-85a8-c5e2a4563596", 00:30:37.137 "is_configured": true, 00:30:37.137 "data_offset": 0, 00:30:37.137 "data_size": 65536 00:30:37.137 }, 00:30:37.137 { 00:30:37.137 "name": "BaseBdev3", 00:30:37.137 "uuid": "175c2364-b2ca-4145-b4bb-c8de26f1111b", 00:30:37.137 "is_configured": true, 00:30:37.137 "data_offset": 0, 
00:30:37.137 "data_size": 65536 00:30:37.137 }, 00:30:37.137 { 00:30:37.137 "name": "BaseBdev4", 00:30:37.137 "uuid": "a8d51171-4f07-42f6-86cd-8f8037865f84", 00:30:37.137 "is_configured": true, 00:30:37.137 "data_offset": 0, 00:30:37.137 "data_size": 65536 00:30:37.137 } 00:30:37.137 ] 00:30:37.137 }' 00:30:37.137 18:29:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:30:37.137 18:29:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:37.706 18:29:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:30:37.706 18:29:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:30:37.706 18:29:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:30:37.706 18:29:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:37.706 18:29:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:37.706 18:29:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:37.706 18:29:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:37.706 18:29:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:30:37.706 18:29:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:30:37.706 18:29:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:30:37.706 18:29:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:37.706 18:29:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:37.706 [2024-12-06 18:29:08.432097] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:30:37.706 18:29:08 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:37.706 18:29:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:30:37.706 18:29:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:30:37.706 18:29:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:37.706 18:29:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:37.706 18:29:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:37.706 18:29:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:30:37.706 18:29:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:37.707 18:29:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:30:37.707 18:29:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:30:37.707 18:29:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:30:37.707 18:29:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:37.707 18:29:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:37.707 [2024-12-06 18:29:08.584428] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:30:37.966 18:29:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:37.966 18:29:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:30:37.966 18:29:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:30:37.966 18:29:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:37.966 18:29:08 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:30:37.966 18:29:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:37.966 18:29:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:30:37.966 18:29:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:37.966 18:29:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:30:37.966 18:29:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:30:37.966 18:29:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:30:37.966 18:29:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:37.966 18:29:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:37.966 [2024-12-06 18:29:08.734434] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:30:37.966 [2024-12-06 18:29:08.734541] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:30:37.966 [2024-12-06 18:29:08.831261] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:30:37.966 [2024-12-06 18:29:08.831323] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:30:37.966 [2024-12-06 18:29:08.831338] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:30:37.966 18:29:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:37.966 18:29:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:30:37.966 18:29:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:30:37.967 18:29:08 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:30:37.967 18:29:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:37.967 18:29:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:37.967 18:29:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:37.967 18:29:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:37.967 18:29:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:30:37.967 18:29:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:30:37.967 18:29:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:30:37.967 18:29:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:30:37.967 18:29:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:30:37.967 18:29:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:30:37.967 18:29:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:37.967 18:29:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:38.227 BaseBdev2 00:30:38.227 18:29:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:38.227 18:29:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:30:38.227 18:29:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:30:38.227 18:29:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:30:38.227 18:29:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:30:38.227 18:29:08 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # [[ -z '' ]] 00:30:38.227 18:29:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:30:38.227 18:29:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:30:38.227 18:29:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:38.227 18:29:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:38.227 18:29:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:38.227 18:29:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:30:38.227 18:29:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:38.227 18:29:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:38.227 [ 00:30:38.227 { 00:30:38.227 "name": "BaseBdev2", 00:30:38.227 "aliases": [ 00:30:38.227 "fae24111-e0c9-4275-816b-78f08be3e3b3" 00:30:38.227 ], 00:30:38.227 "product_name": "Malloc disk", 00:30:38.227 "block_size": 512, 00:30:38.227 "num_blocks": 65536, 00:30:38.227 "uuid": "fae24111-e0c9-4275-816b-78f08be3e3b3", 00:30:38.227 "assigned_rate_limits": { 00:30:38.227 "rw_ios_per_sec": 0, 00:30:38.227 "rw_mbytes_per_sec": 0, 00:30:38.227 "r_mbytes_per_sec": 0, 00:30:38.227 "w_mbytes_per_sec": 0 00:30:38.227 }, 00:30:38.227 "claimed": false, 00:30:38.227 "zoned": false, 00:30:38.227 "supported_io_types": { 00:30:38.227 "read": true, 00:30:38.227 "write": true, 00:30:38.227 "unmap": true, 00:30:38.227 "flush": true, 00:30:38.227 "reset": true, 00:30:38.227 "nvme_admin": false, 00:30:38.227 "nvme_io": false, 00:30:38.227 "nvme_io_md": false, 00:30:38.227 "write_zeroes": true, 00:30:38.227 "zcopy": true, 00:30:38.227 "get_zone_info": false, 00:30:38.227 "zone_management": false, 00:30:38.227 "zone_append": false, 
00:30:38.227 "compare": false, 00:30:38.227 "compare_and_write": false, 00:30:38.227 "abort": true, 00:30:38.227 "seek_hole": false, 00:30:38.227 "seek_data": false, 00:30:38.227 "copy": true, 00:30:38.227 "nvme_iov_md": false 00:30:38.227 }, 00:30:38.227 "memory_domains": [ 00:30:38.227 { 00:30:38.227 "dma_device_id": "system", 00:30:38.227 "dma_device_type": 1 00:30:38.227 }, 00:30:38.227 { 00:30:38.227 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:38.227 "dma_device_type": 2 00:30:38.227 } 00:30:38.227 ], 00:30:38.227 "driver_specific": {} 00:30:38.227 } 00:30:38.227 ] 00:30:38.227 18:29:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:38.227 18:29:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:30:38.227 18:29:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:30:38.227 18:29:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:30:38.227 18:29:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:30:38.227 18:29:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:38.227 18:29:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:38.227 BaseBdev3 00:30:38.227 18:29:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:38.227 18:29:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:30:38.227 18:29:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:30:38.227 18:29:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:30:38.227 18:29:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:30:38.227 18:29:08 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # [[ -z '' ]] 00:30:38.227 18:29:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:30:38.227 18:29:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:30:38.227 18:29:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:38.227 18:29:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:38.227 18:29:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:38.227 18:29:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:30:38.227 18:29:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:38.227 18:29:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:38.227 [ 00:30:38.227 { 00:30:38.227 "name": "BaseBdev3", 00:30:38.227 "aliases": [ 00:30:38.227 "1e21b02d-e9c8-4050-9720-064cff5f76f3" 00:30:38.227 ], 00:30:38.227 "product_name": "Malloc disk", 00:30:38.227 "block_size": 512, 00:30:38.227 "num_blocks": 65536, 00:30:38.227 "uuid": "1e21b02d-e9c8-4050-9720-064cff5f76f3", 00:30:38.227 "assigned_rate_limits": { 00:30:38.227 "rw_ios_per_sec": 0, 00:30:38.227 "rw_mbytes_per_sec": 0, 00:30:38.227 "r_mbytes_per_sec": 0, 00:30:38.227 "w_mbytes_per_sec": 0 00:30:38.227 }, 00:30:38.227 "claimed": false, 00:30:38.227 "zoned": false, 00:30:38.227 "supported_io_types": { 00:30:38.227 "read": true, 00:30:38.227 "write": true, 00:30:38.227 "unmap": true, 00:30:38.227 "flush": true, 00:30:38.227 "reset": true, 00:30:38.227 "nvme_admin": false, 00:30:38.227 "nvme_io": false, 00:30:38.227 "nvme_io_md": false, 00:30:38.227 "write_zeroes": true, 00:30:38.227 "zcopy": true, 00:30:38.227 "get_zone_info": false, 00:30:38.227 "zone_management": false, 00:30:38.227 "zone_append": false, 
00:30:38.227 "compare": false, 00:30:38.227 "compare_and_write": false, 00:30:38.227 "abort": true, 00:30:38.227 "seek_hole": false, 00:30:38.227 "seek_data": false, 00:30:38.227 "copy": true, 00:30:38.227 "nvme_iov_md": false 00:30:38.227 }, 00:30:38.227 "memory_domains": [ 00:30:38.227 { 00:30:38.227 "dma_device_id": "system", 00:30:38.227 "dma_device_type": 1 00:30:38.227 }, 00:30:38.227 { 00:30:38.227 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:38.227 "dma_device_type": 2 00:30:38.227 } 00:30:38.227 ], 00:30:38.227 "driver_specific": {} 00:30:38.227 } 00:30:38.227 ] 00:30:38.227 18:29:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:38.227 18:29:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:30:38.227 18:29:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:30:38.227 18:29:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:30:38.227 18:29:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:30:38.227 18:29:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:38.227 18:29:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:38.227 BaseBdev4 00:30:38.227 18:29:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:38.227 18:29:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:30:38.227 18:29:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:30:38.227 18:29:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:30:38.227 18:29:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:30:38.227 18:29:09 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # [[ -z '' ]] 00:30:38.227 18:29:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:30:38.227 18:29:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:30:38.227 18:29:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:38.227 18:29:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:38.227 18:29:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:38.228 18:29:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:30:38.228 18:29:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:38.228 18:29:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:38.228 [ 00:30:38.228 { 00:30:38.228 "name": "BaseBdev4", 00:30:38.228 "aliases": [ 00:30:38.228 "7e8bd306-3e36-403c-933c-42c6cd1d707d" 00:30:38.228 ], 00:30:38.228 "product_name": "Malloc disk", 00:30:38.228 "block_size": 512, 00:30:38.228 "num_blocks": 65536, 00:30:38.228 "uuid": "7e8bd306-3e36-403c-933c-42c6cd1d707d", 00:30:38.228 "assigned_rate_limits": { 00:30:38.228 "rw_ios_per_sec": 0, 00:30:38.228 "rw_mbytes_per_sec": 0, 00:30:38.228 "r_mbytes_per_sec": 0, 00:30:38.228 "w_mbytes_per_sec": 0 00:30:38.228 }, 00:30:38.228 "claimed": false, 00:30:38.228 "zoned": false, 00:30:38.228 "supported_io_types": { 00:30:38.228 "read": true, 00:30:38.228 "write": true, 00:30:38.228 "unmap": true, 00:30:38.228 "flush": true, 00:30:38.228 "reset": true, 00:30:38.228 "nvme_admin": false, 00:30:38.228 "nvme_io": false, 00:30:38.228 "nvme_io_md": false, 00:30:38.228 "write_zeroes": true, 00:30:38.228 "zcopy": true, 00:30:38.228 "get_zone_info": false, 00:30:38.228 "zone_management": false, 00:30:38.228 "zone_append": false, 
00:30:38.228 "compare": false, 00:30:38.228 "compare_and_write": false, 00:30:38.228 "abort": true, 00:30:38.228 "seek_hole": false, 00:30:38.228 "seek_data": false, 00:30:38.228 "copy": true, 00:30:38.228 "nvme_iov_md": false 00:30:38.228 }, 00:30:38.228 "memory_domains": [ 00:30:38.228 { 00:30:38.228 "dma_device_id": "system", 00:30:38.228 "dma_device_type": 1 00:30:38.228 }, 00:30:38.228 { 00:30:38.228 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:38.228 "dma_device_type": 2 00:30:38.228 } 00:30:38.228 ], 00:30:38.228 "driver_specific": {} 00:30:38.228 } 00:30:38.228 ] 00:30:38.228 18:29:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:38.228 18:29:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:30:38.228 18:29:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:30:38.228 18:29:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:30:38.228 18:29:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:30:38.228 18:29:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:38.228 18:29:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:38.228 [2024-12-06 18:29:09.123120] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:30:38.228 [2024-12-06 18:29:09.123186] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:30:38.228 [2024-12-06 18:29:09.123212] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:30:38.228 [2024-12-06 18:29:09.125481] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:30:38.228 [2024-12-06 18:29:09.125533] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:30:38.228 18:29:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:38.228 18:29:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:30:38.228 18:29:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:30:38.228 18:29:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:30:38.228 18:29:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:30:38.228 18:29:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:30:38.228 18:29:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:30:38.228 18:29:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:30:38.228 18:29:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:30:38.228 18:29:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:30:38.228 18:29:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:30:38.228 18:29:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:38.228 18:29:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:30:38.228 18:29:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:38.228 18:29:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:38.228 18:29:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:38.228 18:29:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 
-- # raid_bdev_info='{ 00:30:38.228 "name": "Existed_Raid", 00:30:38.228 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:38.228 "strip_size_kb": 0, 00:30:38.228 "state": "configuring", 00:30:38.228 "raid_level": "raid1", 00:30:38.228 "superblock": false, 00:30:38.228 "num_base_bdevs": 4, 00:30:38.228 "num_base_bdevs_discovered": 3, 00:30:38.228 "num_base_bdevs_operational": 4, 00:30:38.228 "base_bdevs_list": [ 00:30:38.228 { 00:30:38.228 "name": "BaseBdev1", 00:30:38.228 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:38.228 "is_configured": false, 00:30:38.228 "data_offset": 0, 00:30:38.228 "data_size": 0 00:30:38.228 }, 00:30:38.228 { 00:30:38.228 "name": "BaseBdev2", 00:30:38.228 "uuid": "fae24111-e0c9-4275-816b-78f08be3e3b3", 00:30:38.228 "is_configured": true, 00:30:38.228 "data_offset": 0, 00:30:38.228 "data_size": 65536 00:30:38.228 }, 00:30:38.228 { 00:30:38.228 "name": "BaseBdev3", 00:30:38.228 "uuid": "1e21b02d-e9c8-4050-9720-064cff5f76f3", 00:30:38.228 "is_configured": true, 00:30:38.228 "data_offset": 0, 00:30:38.228 "data_size": 65536 00:30:38.228 }, 00:30:38.228 { 00:30:38.228 "name": "BaseBdev4", 00:30:38.228 "uuid": "7e8bd306-3e36-403c-933c-42c6cd1d707d", 00:30:38.228 "is_configured": true, 00:30:38.228 "data_offset": 0, 00:30:38.228 "data_size": 65536 00:30:38.228 } 00:30:38.228 ] 00:30:38.228 }' 00:30:38.228 18:29:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:30:38.228 18:29:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:38.796 18:29:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:30:38.797 18:29:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:38.797 18:29:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:38.797 [2024-12-06 18:29:09.502609] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 
00:30:38.797 18:29:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:38.797 18:29:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:30:38.797 18:29:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:30:38.797 18:29:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:30:38.797 18:29:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:30:38.797 18:29:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:30:38.797 18:29:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:30:38.797 18:29:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:30:38.797 18:29:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:30:38.797 18:29:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:30:38.797 18:29:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:30:38.797 18:29:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:38.797 18:29:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:30:38.797 18:29:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:38.797 18:29:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:38.797 18:29:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:38.797 18:29:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:30:38.797 "name": "Existed_Raid", 00:30:38.797 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:30:38.797 "strip_size_kb": 0, 00:30:38.797 "state": "configuring", 00:30:38.797 "raid_level": "raid1", 00:30:38.797 "superblock": false, 00:30:38.797 "num_base_bdevs": 4, 00:30:38.797 "num_base_bdevs_discovered": 2, 00:30:38.797 "num_base_bdevs_operational": 4, 00:30:38.797 "base_bdevs_list": [ 00:30:38.797 { 00:30:38.797 "name": "BaseBdev1", 00:30:38.797 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:38.797 "is_configured": false, 00:30:38.797 "data_offset": 0, 00:30:38.797 "data_size": 0 00:30:38.797 }, 00:30:38.797 { 00:30:38.797 "name": null, 00:30:38.797 "uuid": "fae24111-e0c9-4275-816b-78f08be3e3b3", 00:30:38.797 "is_configured": false, 00:30:38.797 "data_offset": 0, 00:30:38.797 "data_size": 65536 00:30:38.797 }, 00:30:38.797 { 00:30:38.797 "name": "BaseBdev3", 00:30:38.797 "uuid": "1e21b02d-e9c8-4050-9720-064cff5f76f3", 00:30:38.797 "is_configured": true, 00:30:38.797 "data_offset": 0, 00:30:38.797 "data_size": 65536 00:30:38.797 }, 00:30:38.797 { 00:30:38.797 "name": "BaseBdev4", 00:30:38.797 "uuid": "7e8bd306-3e36-403c-933c-42c6cd1d707d", 00:30:38.797 "is_configured": true, 00:30:38.797 "data_offset": 0, 00:30:38.797 "data_size": 65536 00:30:38.797 } 00:30:38.797 ] 00:30:38.797 }' 00:30:38.797 18:29:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:30:38.797 18:29:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:39.056 18:29:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:39.056 18:29:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:39.056 18:29:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:39.056 18:29:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:30:39.056 18:29:09 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:39.056 18:29:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:30:39.056 18:29:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:30:39.056 18:29:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:39.056 18:29:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:39.056 [2024-12-06 18:29:09.965454] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:30:39.056 BaseBdev1 00:30:39.056 18:29:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:39.056 18:29:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:30:39.056 18:29:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:30:39.056 18:29:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:30:39.056 18:29:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:30:39.056 18:29:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:30:39.056 18:29:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:30:39.056 18:29:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:30:39.056 18:29:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:39.056 18:29:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:39.056 18:29:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:39.056 18:29:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b 
BaseBdev1 -t 2000 00:30:39.056 18:29:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:39.056 18:29:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:39.056 [ 00:30:39.056 { 00:30:39.056 "name": "BaseBdev1", 00:30:39.056 "aliases": [ 00:30:39.056 "134d3b25-0446-41ce-9d67-883babac6df6" 00:30:39.056 ], 00:30:39.056 "product_name": "Malloc disk", 00:30:39.056 "block_size": 512, 00:30:39.056 "num_blocks": 65536, 00:30:39.056 "uuid": "134d3b25-0446-41ce-9d67-883babac6df6", 00:30:39.056 "assigned_rate_limits": { 00:30:39.056 "rw_ios_per_sec": 0, 00:30:39.056 "rw_mbytes_per_sec": 0, 00:30:39.056 "r_mbytes_per_sec": 0, 00:30:39.056 "w_mbytes_per_sec": 0 00:30:39.056 }, 00:30:39.056 "claimed": true, 00:30:39.056 "claim_type": "exclusive_write", 00:30:39.056 "zoned": false, 00:30:39.056 "supported_io_types": { 00:30:39.056 "read": true, 00:30:39.056 "write": true, 00:30:39.056 "unmap": true, 00:30:39.056 "flush": true, 00:30:39.056 "reset": true, 00:30:39.056 "nvme_admin": false, 00:30:39.056 "nvme_io": false, 00:30:39.056 "nvme_io_md": false, 00:30:39.056 "write_zeroes": true, 00:30:39.056 "zcopy": true, 00:30:39.056 "get_zone_info": false, 00:30:39.056 "zone_management": false, 00:30:39.056 "zone_append": false, 00:30:39.056 "compare": false, 00:30:39.056 "compare_and_write": false, 00:30:39.056 "abort": true, 00:30:39.056 "seek_hole": false, 00:30:39.056 "seek_data": false, 00:30:39.056 "copy": true, 00:30:39.056 "nvme_iov_md": false 00:30:39.056 }, 00:30:39.056 "memory_domains": [ 00:30:39.056 { 00:30:39.056 "dma_device_id": "system", 00:30:39.056 "dma_device_type": 1 00:30:39.056 }, 00:30:39.056 { 00:30:39.056 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:39.056 "dma_device_type": 2 00:30:39.056 } 00:30:39.056 ], 00:30:39.056 "driver_specific": {} 00:30:39.056 } 00:30:39.056 ] 00:30:39.056 18:29:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:30:39.056 18:29:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:30:39.056 18:29:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:30:39.056 18:29:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:30:39.056 18:29:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:30:39.056 18:29:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:30:39.056 18:29:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:30:39.056 18:29:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:30:39.056 18:29:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:30:39.056 18:29:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:30:39.056 18:29:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:30:39.056 18:29:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:30:39.316 18:29:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:39.316 18:29:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:30:39.316 18:29:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:39.316 18:29:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:39.316 18:29:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:39.316 18:29:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:30:39.316 "name": "Existed_Raid", 00:30:39.316 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:30:39.316 "strip_size_kb": 0, 00:30:39.316 "state": "configuring", 00:30:39.316 "raid_level": "raid1", 00:30:39.316 "superblock": false, 00:30:39.316 "num_base_bdevs": 4, 00:30:39.316 "num_base_bdevs_discovered": 3, 00:30:39.316 "num_base_bdevs_operational": 4, 00:30:39.316 "base_bdevs_list": [ 00:30:39.316 { 00:30:39.316 "name": "BaseBdev1", 00:30:39.316 "uuid": "134d3b25-0446-41ce-9d67-883babac6df6", 00:30:39.316 "is_configured": true, 00:30:39.316 "data_offset": 0, 00:30:39.316 "data_size": 65536 00:30:39.316 }, 00:30:39.316 { 00:30:39.316 "name": null, 00:30:39.316 "uuid": "fae24111-e0c9-4275-816b-78f08be3e3b3", 00:30:39.316 "is_configured": false, 00:30:39.316 "data_offset": 0, 00:30:39.316 "data_size": 65536 00:30:39.316 }, 00:30:39.316 { 00:30:39.316 "name": "BaseBdev3", 00:30:39.316 "uuid": "1e21b02d-e9c8-4050-9720-064cff5f76f3", 00:30:39.316 "is_configured": true, 00:30:39.316 "data_offset": 0, 00:30:39.316 "data_size": 65536 00:30:39.316 }, 00:30:39.316 { 00:30:39.316 "name": "BaseBdev4", 00:30:39.316 "uuid": "7e8bd306-3e36-403c-933c-42c6cd1d707d", 00:30:39.316 "is_configured": true, 00:30:39.316 "data_offset": 0, 00:30:39.316 "data_size": 65536 00:30:39.316 } 00:30:39.316 ] 00:30:39.316 }' 00:30:39.316 18:29:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:30:39.316 18:29:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:39.575 18:29:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:39.575 18:29:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:39.575 18:29:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:39.575 18:29:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:30:39.575 18:29:10 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:39.575 18:29:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:30:39.575 18:29:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:30:39.575 18:29:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:39.575 18:29:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:39.575 [2024-12-06 18:29:10.469305] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:30:39.575 18:29:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:39.575 18:29:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:30:39.575 18:29:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:30:39.575 18:29:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:30:39.575 18:29:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:30:39.575 18:29:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:30:39.575 18:29:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:30:39.575 18:29:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:30:39.575 18:29:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:30:39.575 18:29:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:30:39.575 18:29:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:30:39.575 18:29:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:30:39.575 18:29:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:39.575 18:29:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:39.575 18:29:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:39.575 18:29:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:39.575 18:29:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:30:39.575 "name": "Existed_Raid", 00:30:39.575 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:39.575 "strip_size_kb": 0, 00:30:39.575 "state": "configuring", 00:30:39.575 "raid_level": "raid1", 00:30:39.575 "superblock": false, 00:30:39.575 "num_base_bdevs": 4, 00:30:39.575 "num_base_bdevs_discovered": 2, 00:30:39.575 "num_base_bdevs_operational": 4, 00:30:39.575 "base_bdevs_list": [ 00:30:39.575 { 00:30:39.575 "name": "BaseBdev1", 00:30:39.575 "uuid": "134d3b25-0446-41ce-9d67-883babac6df6", 00:30:39.575 "is_configured": true, 00:30:39.575 "data_offset": 0, 00:30:39.575 "data_size": 65536 00:30:39.575 }, 00:30:39.575 { 00:30:39.575 "name": null, 00:30:39.575 "uuid": "fae24111-e0c9-4275-816b-78f08be3e3b3", 00:30:39.575 "is_configured": false, 00:30:39.575 "data_offset": 0, 00:30:39.575 "data_size": 65536 00:30:39.575 }, 00:30:39.575 { 00:30:39.575 "name": null, 00:30:39.575 "uuid": "1e21b02d-e9c8-4050-9720-064cff5f76f3", 00:30:39.575 "is_configured": false, 00:30:39.575 "data_offset": 0, 00:30:39.575 "data_size": 65536 00:30:39.575 }, 00:30:39.575 { 00:30:39.575 "name": "BaseBdev4", 00:30:39.575 "uuid": "7e8bd306-3e36-403c-933c-42c6cd1d707d", 00:30:39.575 "is_configured": true, 00:30:39.575 "data_offset": 0, 00:30:39.575 "data_size": 65536 00:30:39.575 } 00:30:39.575 ] 00:30:39.575 }' 00:30:39.575 18:29:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:30:39.575 18:29:10 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:40.143 18:29:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:40.143 18:29:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:40.143 18:29:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:30:40.143 18:29:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:40.143 18:29:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:40.143 18:29:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:30:40.143 18:29:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:30:40.143 18:29:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:40.143 18:29:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:40.143 [2024-12-06 18:29:10.917230] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:30:40.143 18:29:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:40.143 18:29:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:30:40.143 18:29:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:30:40.143 18:29:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:30:40.143 18:29:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:30:40.143 18:29:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:30:40.143 18:29:10 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:30:40.143 18:29:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:30:40.143 18:29:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:30:40.143 18:29:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:30:40.143 18:29:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:30:40.143 18:29:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:40.143 18:29:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:40.143 18:29:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:30:40.143 18:29:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:40.143 18:29:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:40.143 18:29:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:30:40.143 "name": "Existed_Raid", 00:30:40.143 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:40.143 "strip_size_kb": 0, 00:30:40.143 "state": "configuring", 00:30:40.143 "raid_level": "raid1", 00:30:40.143 "superblock": false, 00:30:40.143 "num_base_bdevs": 4, 00:30:40.143 "num_base_bdevs_discovered": 3, 00:30:40.143 "num_base_bdevs_operational": 4, 00:30:40.143 "base_bdevs_list": [ 00:30:40.143 { 00:30:40.143 "name": "BaseBdev1", 00:30:40.143 "uuid": "134d3b25-0446-41ce-9d67-883babac6df6", 00:30:40.143 "is_configured": true, 00:30:40.143 "data_offset": 0, 00:30:40.143 "data_size": 65536 00:30:40.143 }, 00:30:40.143 { 00:30:40.143 "name": null, 00:30:40.143 "uuid": "fae24111-e0c9-4275-816b-78f08be3e3b3", 00:30:40.143 "is_configured": false, 00:30:40.143 "data_offset": 
0, 00:30:40.143 "data_size": 65536 00:30:40.143 }, 00:30:40.143 { 00:30:40.143 "name": "BaseBdev3", 00:30:40.143 "uuid": "1e21b02d-e9c8-4050-9720-064cff5f76f3", 00:30:40.143 "is_configured": true, 00:30:40.143 "data_offset": 0, 00:30:40.143 "data_size": 65536 00:30:40.143 }, 00:30:40.143 { 00:30:40.143 "name": "BaseBdev4", 00:30:40.143 "uuid": "7e8bd306-3e36-403c-933c-42c6cd1d707d", 00:30:40.143 "is_configured": true, 00:30:40.144 "data_offset": 0, 00:30:40.144 "data_size": 65536 00:30:40.144 } 00:30:40.144 ] 00:30:40.144 }' 00:30:40.144 18:29:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:30:40.144 18:29:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:40.709 18:29:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:40.709 18:29:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:40.709 18:29:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:40.709 18:29:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:30:40.709 18:29:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:40.709 18:29:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:30:40.709 18:29:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:30:40.709 18:29:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:40.709 18:29:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:40.709 [2024-12-06 18:29:11.408557] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:30:40.709 18:29:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:40.709 18:29:11 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:30:40.709 18:29:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:30:40.709 18:29:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:30:40.709 18:29:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:30:40.709 18:29:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:30:40.709 18:29:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:30:40.709 18:29:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:30:40.709 18:29:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:30:40.709 18:29:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:30:40.709 18:29:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:30:40.709 18:29:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:40.709 18:29:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:40.709 18:29:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:30:40.709 18:29:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:40.709 18:29:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:40.709 18:29:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:30:40.709 "name": "Existed_Raid", 00:30:40.709 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:40.709 "strip_size_kb": 0, 00:30:40.709 "state": "configuring", 00:30:40.709 
"raid_level": "raid1", 00:30:40.709 "superblock": false, 00:30:40.709 "num_base_bdevs": 4, 00:30:40.709 "num_base_bdevs_discovered": 2, 00:30:40.709 "num_base_bdevs_operational": 4, 00:30:40.709 "base_bdevs_list": [ 00:30:40.709 { 00:30:40.709 "name": null, 00:30:40.709 "uuid": "134d3b25-0446-41ce-9d67-883babac6df6", 00:30:40.709 "is_configured": false, 00:30:40.709 "data_offset": 0, 00:30:40.709 "data_size": 65536 00:30:40.709 }, 00:30:40.709 { 00:30:40.709 "name": null, 00:30:40.709 "uuid": "fae24111-e0c9-4275-816b-78f08be3e3b3", 00:30:40.709 "is_configured": false, 00:30:40.709 "data_offset": 0, 00:30:40.709 "data_size": 65536 00:30:40.709 }, 00:30:40.709 { 00:30:40.709 "name": "BaseBdev3", 00:30:40.709 "uuid": "1e21b02d-e9c8-4050-9720-064cff5f76f3", 00:30:40.709 "is_configured": true, 00:30:40.709 "data_offset": 0, 00:30:40.709 "data_size": 65536 00:30:40.709 }, 00:30:40.709 { 00:30:40.709 "name": "BaseBdev4", 00:30:40.709 "uuid": "7e8bd306-3e36-403c-933c-42c6cd1d707d", 00:30:40.709 "is_configured": true, 00:30:40.709 "data_offset": 0, 00:30:40.709 "data_size": 65536 00:30:40.709 } 00:30:40.709 ] 00:30:40.709 }' 00:30:40.709 18:29:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:30:40.709 18:29:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:41.275 18:29:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:30:41.275 18:29:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:41.275 18:29:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:41.275 18:29:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:41.275 18:29:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:41.275 18:29:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ 
false == \f\a\l\s\e ]] 00:30:41.275 18:29:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:30:41.275 18:29:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:41.275 18:29:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:41.275 [2024-12-06 18:29:11.955245] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:30:41.275 18:29:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:41.275 18:29:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:30:41.275 18:29:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:30:41.275 18:29:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:30:41.275 18:29:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:30:41.275 18:29:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:30:41.275 18:29:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:30:41.275 18:29:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:30:41.275 18:29:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:30:41.275 18:29:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:30:41.275 18:29:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:30:41.275 18:29:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:41.275 18:29:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:30:41.275 18:29:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:41.275 18:29:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:30:41.275 18:29:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:41.275 18:29:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:30:41.275 "name": "Existed_Raid", 00:30:41.275 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:41.275 "strip_size_kb": 0, 00:30:41.275 "state": "configuring", 00:30:41.275 "raid_level": "raid1", 00:30:41.275 "superblock": false, 00:30:41.275 "num_base_bdevs": 4, 00:30:41.275 "num_base_bdevs_discovered": 3, 00:30:41.275 "num_base_bdevs_operational": 4, 00:30:41.275 "base_bdevs_list": [ 00:30:41.275 { 00:30:41.275 "name": null, 00:30:41.275 "uuid": "134d3b25-0446-41ce-9d67-883babac6df6", 00:30:41.275 "is_configured": false, 00:30:41.275 "data_offset": 0, 00:30:41.275 "data_size": 65536 00:30:41.275 }, 00:30:41.275 { 00:30:41.275 "name": "BaseBdev2", 00:30:41.275 "uuid": "fae24111-e0c9-4275-816b-78f08be3e3b3", 00:30:41.275 "is_configured": true, 00:30:41.275 "data_offset": 0, 00:30:41.275 "data_size": 65536 00:30:41.275 }, 00:30:41.275 { 00:30:41.275 "name": "BaseBdev3", 00:30:41.275 "uuid": "1e21b02d-e9c8-4050-9720-064cff5f76f3", 00:30:41.275 "is_configured": true, 00:30:41.275 "data_offset": 0, 00:30:41.275 "data_size": 65536 00:30:41.275 }, 00:30:41.275 { 00:30:41.275 "name": "BaseBdev4", 00:30:41.275 "uuid": "7e8bd306-3e36-403c-933c-42c6cd1d707d", 00:30:41.275 "is_configured": true, 00:30:41.275 "data_offset": 0, 00:30:41.275 "data_size": 65536 00:30:41.275 } 00:30:41.275 ] 00:30:41.275 }' 00:30:41.275 18:29:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:30:41.275 18:29:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:41.534 18:29:12 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:41.534 18:29:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:41.534 18:29:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:41.534 18:29:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:30:41.534 18:29:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:41.534 18:29:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:30:41.534 18:29:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:41.534 18:29:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:41.534 18:29:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:41.534 18:29:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:30:41.534 18:29:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:41.794 18:29:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 134d3b25-0446-41ce-9d67-883babac6df6 00:30:41.794 18:29:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:41.794 18:29:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:41.794 [2024-12-06 18:29:12.528336] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:30:41.794 [2024-12-06 18:29:12.528381] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:30:41.794 [2024-12-06 18:29:12.528393] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:30:41.794 
[2024-12-06 18:29:12.528670] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:30:41.794 [2024-12-06 18:29:12.528823] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:30:41.794 [2024-12-06 18:29:12.528834] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:30:41.794 [2024-12-06 18:29:12.529064] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:30:41.794 NewBaseBdev 00:30:41.794 18:29:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:41.794 18:29:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:30:41.794 18:29:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:30:41.794 18:29:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:30:41.794 18:29:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:30:41.794 18:29:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:30:41.794 18:29:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:30:41.794 18:29:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:30:41.794 18:29:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:41.794 18:29:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:41.794 18:29:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:41.794 18:29:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:30:41.794 18:29:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:30:41.794 18:29:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:41.794 [ 00:30:41.794 { 00:30:41.794 "name": "NewBaseBdev", 00:30:41.794 "aliases": [ 00:30:41.794 "134d3b25-0446-41ce-9d67-883babac6df6" 00:30:41.794 ], 00:30:41.794 "product_name": "Malloc disk", 00:30:41.794 "block_size": 512, 00:30:41.794 "num_blocks": 65536, 00:30:41.794 "uuid": "134d3b25-0446-41ce-9d67-883babac6df6", 00:30:41.794 "assigned_rate_limits": { 00:30:41.794 "rw_ios_per_sec": 0, 00:30:41.794 "rw_mbytes_per_sec": 0, 00:30:41.794 "r_mbytes_per_sec": 0, 00:30:41.794 "w_mbytes_per_sec": 0 00:30:41.794 }, 00:30:41.794 "claimed": true, 00:30:41.794 "claim_type": "exclusive_write", 00:30:41.794 "zoned": false, 00:30:41.794 "supported_io_types": { 00:30:41.794 "read": true, 00:30:41.794 "write": true, 00:30:41.794 "unmap": true, 00:30:41.794 "flush": true, 00:30:41.794 "reset": true, 00:30:41.794 "nvme_admin": false, 00:30:41.794 "nvme_io": false, 00:30:41.794 "nvme_io_md": false, 00:30:41.794 "write_zeroes": true, 00:30:41.794 "zcopy": true, 00:30:41.794 "get_zone_info": false, 00:30:41.794 "zone_management": false, 00:30:41.794 "zone_append": false, 00:30:41.794 "compare": false, 00:30:41.794 "compare_and_write": false, 00:30:41.794 "abort": true, 00:30:41.794 "seek_hole": false, 00:30:41.794 "seek_data": false, 00:30:41.794 "copy": true, 00:30:41.794 "nvme_iov_md": false 00:30:41.794 }, 00:30:41.794 "memory_domains": [ 00:30:41.794 { 00:30:41.794 "dma_device_id": "system", 00:30:41.794 "dma_device_type": 1 00:30:41.794 }, 00:30:41.794 { 00:30:41.794 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:41.794 "dma_device_type": 2 00:30:41.794 } 00:30:41.794 ], 00:30:41.794 "driver_specific": {} 00:30:41.794 } 00:30:41.794 ] 00:30:41.795 18:29:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:41.795 18:29:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 
00:30:41.795 18:29:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:30:41.795 18:29:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:30:41.795 18:29:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:30:41.795 18:29:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:30:41.795 18:29:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:30:41.795 18:29:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:30:41.795 18:29:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:30:41.795 18:29:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:30:41.795 18:29:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:30:41.795 18:29:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:30:41.795 18:29:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:41.795 18:29:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:41.795 18:29:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:30:41.795 18:29:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:41.795 18:29:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:41.795 18:29:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:30:41.795 "name": "Existed_Raid", 00:30:41.795 "uuid": "f1da0c9e-b0a2-456d-a7bd-73141d0ea95b", 00:30:41.795 "strip_size_kb": 0, 00:30:41.795 "state": "online", 00:30:41.795 
"raid_level": "raid1", 00:30:41.795 "superblock": false, 00:30:41.795 "num_base_bdevs": 4, 00:30:41.795 "num_base_bdevs_discovered": 4, 00:30:41.795 "num_base_bdevs_operational": 4, 00:30:41.795 "base_bdevs_list": [ 00:30:41.795 { 00:30:41.795 "name": "NewBaseBdev", 00:30:41.795 "uuid": "134d3b25-0446-41ce-9d67-883babac6df6", 00:30:41.795 "is_configured": true, 00:30:41.795 "data_offset": 0, 00:30:41.795 "data_size": 65536 00:30:41.795 }, 00:30:41.795 { 00:30:41.795 "name": "BaseBdev2", 00:30:41.795 "uuid": "fae24111-e0c9-4275-816b-78f08be3e3b3", 00:30:41.795 "is_configured": true, 00:30:41.795 "data_offset": 0, 00:30:41.795 "data_size": 65536 00:30:41.795 }, 00:30:41.795 { 00:30:41.795 "name": "BaseBdev3", 00:30:41.795 "uuid": "1e21b02d-e9c8-4050-9720-064cff5f76f3", 00:30:41.795 "is_configured": true, 00:30:41.795 "data_offset": 0, 00:30:41.795 "data_size": 65536 00:30:41.795 }, 00:30:41.795 { 00:30:41.795 "name": "BaseBdev4", 00:30:41.795 "uuid": "7e8bd306-3e36-403c-933c-42c6cd1d707d", 00:30:41.795 "is_configured": true, 00:30:41.795 "data_offset": 0, 00:30:41.795 "data_size": 65536 00:30:41.795 } 00:30:41.795 ] 00:30:41.795 }' 00:30:41.795 18:29:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:30:41.795 18:29:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:42.054 18:29:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:30:42.054 18:29:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:30:42.054 18:29:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:30:42.054 18:29:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:30:42.054 18:29:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:30:42.054 18:29:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- 
# local cmp_raid_bdev cmp_base_bdev 00:30:42.054 18:29:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:30:42.054 18:29:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:30:42.054 18:29:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:42.054 18:29:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:42.054 [2024-12-06 18:29:12.992140] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:30:42.313 18:29:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:42.313 18:29:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:30:42.313 "name": "Existed_Raid", 00:30:42.313 "aliases": [ 00:30:42.313 "f1da0c9e-b0a2-456d-a7bd-73141d0ea95b" 00:30:42.313 ], 00:30:42.313 "product_name": "Raid Volume", 00:30:42.313 "block_size": 512, 00:30:42.313 "num_blocks": 65536, 00:30:42.313 "uuid": "f1da0c9e-b0a2-456d-a7bd-73141d0ea95b", 00:30:42.313 "assigned_rate_limits": { 00:30:42.313 "rw_ios_per_sec": 0, 00:30:42.313 "rw_mbytes_per_sec": 0, 00:30:42.313 "r_mbytes_per_sec": 0, 00:30:42.313 "w_mbytes_per_sec": 0 00:30:42.313 }, 00:30:42.313 "claimed": false, 00:30:42.313 "zoned": false, 00:30:42.313 "supported_io_types": { 00:30:42.313 "read": true, 00:30:42.313 "write": true, 00:30:42.313 "unmap": false, 00:30:42.313 "flush": false, 00:30:42.313 "reset": true, 00:30:42.313 "nvme_admin": false, 00:30:42.313 "nvme_io": false, 00:30:42.313 "nvme_io_md": false, 00:30:42.313 "write_zeroes": true, 00:30:42.313 "zcopy": false, 00:30:42.313 "get_zone_info": false, 00:30:42.313 "zone_management": false, 00:30:42.313 "zone_append": false, 00:30:42.313 "compare": false, 00:30:42.313 "compare_and_write": false, 00:30:42.313 "abort": false, 00:30:42.313 "seek_hole": false, 00:30:42.313 "seek_data": false, 00:30:42.313 
"copy": false, 00:30:42.313 "nvme_iov_md": false 00:30:42.313 }, 00:30:42.313 "memory_domains": [ 00:30:42.313 { 00:30:42.313 "dma_device_id": "system", 00:30:42.313 "dma_device_type": 1 00:30:42.313 }, 00:30:42.313 { 00:30:42.313 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:42.313 "dma_device_type": 2 00:30:42.313 }, 00:30:42.313 { 00:30:42.313 "dma_device_id": "system", 00:30:42.313 "dma_device_type": 1 00:30:42.313 }, 00:30:42.313 { 00:30:42.313 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:42.313 "dma_device_type": 2 00:30:42.313 }, 00:30:42.313 { 00:30:42.313 "dma_device_id": "system", 00:30:42.313 "dma_device_type": 1 00:30:42.313 }, 00:30:42.313 { 00:30:42.313 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:42.313 "dma_device_type": 2 00:30:42.313 }, 00:30:42.313 { 00:30:42.313 "dma_device_id": "system", 00:30:42.313 "dma_device_type": 1 00:30:42.313 }, 00:30:42.313 { 00:30:42.313 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:42.313 "dma_device_type": 2 00:30:42.313 } 00:30:42.313 ], 00:30:42.313 "driver_specific": { 00:30:42.313 "raid": { 00:30:42.313 "uuid": "f1da0c9e-b0a2-456d-a7bd-73141d0ea95b", 00:30:42.313 "strip_size_kb": 0, 00:30:42.313 "state": "online", 00:30:42.313 "raid_level": "raid1", 00:30:42.313 "superblock": false, 00:30:42.313 "num_base_bdevs": 4, 00:30:42.313 "num_base_bdevs_discovered": 4, 00:30:42.313 "num_base_bdevs_operational": 4, 00:30:42.313 "base_bdevs_list": [ 00:30:42.313 { 00:30:42.313 "name": "NewBaseBdev", 00:30:42.313 "uuid": "134d3b25-0446-41ce-9d67-883babac6df6", 00:30:42.313 "is_configured": true, 00:30:42.313 "data_offset": 0, 00:30:42.313 "data_size": 65536 00:30:42.313 }, 00:30:42.313 { 00:30:42.313 "name": "BaseBdev2", 00:30:42.313 "uuid": "fae24111-e0c9-4275-816b-78f08be3e3b3", 00:30:42.313 "is_configured": true, 00:30:42.313 "data_offset": 0, 00:30:42.313 "data_size": 65536 00:30:42.313 }, 00:30:42.313 { 00:30:42.313 "name": "BaseBdev3", 00:30:42.313 "uuid": "1e21b02d-e9c8-4050-9720-064cff5f76f3", 00:30:42.313 
"is_configured": true, 00:30:42.313 "data_offset": 0, 00:30:42.313 "data_size": 65536 00:30:42.313 }, 00:30:42.313 { 00:30:42.313 "name": "BaseBdev4", 00:30:42.313 "uuid": "7e8bd306-3e36-403c-933c-42c6cd1d707d", 00:30:42.313 "is_configured": true, 00:30:42.313 "data_offset": 0, 00:30:42.313 "data_size": 65536 00:30:42.313 } 00:30:42.313 ] 00:30:42.313 } 00:30:42.313 } 00:30:42.313 }' 00:30:42.313 18:29:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:30:42.313 18:29:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:30:42.313 BaseBdev2 00:30:42.313 BaseBdev3 00:30:42.313 BaseBdev4' 00:30:42.313 18:29:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:30:42.313 18:29:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:30:42.314 18:29:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:30:42.314 18:29:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:30:42.314 18:29:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:30:42.314 18:29:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:42.314 18:29:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:42.314 18:29:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:42.314 18:29:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:30:42.314 18:29:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:30:42.314 18:29:13 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:30:42.314 18:29:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:30:42.314 18:29:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:30:42.314 18:29:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:42.314 18:29:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:42.314 18:29:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:42.314 18:29:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:30:42.314 18:29:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:30:42.314 18:29:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:30:42.314 18:29:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:30:42.314 18:29:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:30:42.314 18:29:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:42.314 18:29:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:42.314 18:29:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:42.572 18:29:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:30:42.572 18:29:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:30:42.572 18:29:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:30:42.572 18:29:13 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:30:42.572 18:29:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:42.572 18:29:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:30:42.572 18:29:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:42.572 18:29:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:42.572 18:29:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:30:42.572 18:29:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:30:42.572 18:29:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:30:42.572 18:29:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:42.572 18:29:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:30:42.572 [2024-12-06 18:29:13.315269] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:30:42.572 [2024-12-06 18:29:13.315298] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:30:42.572 [2024-12-06 18:29:13.315375] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:30:42.572 [2024-12-06 18:29:13.315659] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:30:42.572 [2024-12-06 18:29:13.315674] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:30:42.572 18:29:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:42.572 18:29:13 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@326 -- # killprocess 72902 00:30:42.572 18:29:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 72902 ']' 00:30:42.572 18:29:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 72902 00:30:42.572 18:29:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:30:42.572 18:29:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:42.572 18:29:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72902 00:30:42.572 18:29:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:30:42.572 killing process with pid 72902 00:30:42.572 18:29:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:30:42.573 18:29:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72902' 00:30:42.573 18:29:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 72902 00:30:42.573 [2024-12-06 18:29:13.366771] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:30:42.573 18:29:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 72902 00:30:42.831 [2024-12-06 18:29:13.772424] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:30:44.210 18:29:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:30:44.210 ************************************ 00:30:44.210 END TEST raid_state_function_test 00:30:44.210 ************************************ 00:30:44.210 00:30:44.210 real 0m11.302s 00:30:44.210 user 0m17.745s 00:30:44.210 sys 0m2.349s 00:30:44.210 18:29:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:44.210 18:29:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:30:44.210 18:29:14 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 4 true 00:30:44.210 18:29:14 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:30:44.210 18:29:14 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:44.210 18:29:14 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:30:44.210 ************************************ 00:30:44.210 START TEST raid_state_function_test_sb 00:30:44.210 ************************************ 00:30:44.210 18:29:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 4 true 00:30:44.210 18:29:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:30:44.210 18:29:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:30:44.210 18:29:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:30:44.210 18:29:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:30:44.210 18:29:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:30:44.210 18:29:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:30:44.210 18:29:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:30:44.210 18:29:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:30:44.210 18:29:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:30:44.210 18:29:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:30:44.210 18:29:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:30:44.210 18:29:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:30:44.210 
18:29:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:30:44.210 18:29:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:30:44.210 18:29:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:30:44.210 18:29:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:30:44.210 18:29:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:30:44.210 18:29:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:30:44.210 18:29:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:30:44.210 18:29:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:30:44.210 18:29:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:30:44.210 18:29:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:30:44.210 18:29:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:30:44.210 18:29:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:30:44.210 18:29:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:30:44.211 18:29:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:30:44.211 18:29:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:30:44.211 18:29:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:30:44.211 Process raid pid: 73573 00:30:44.211 18:29:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=73573 00:30:44.211 18:29:15 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:30:44.211 18:29:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 73573' 00:30:44.211 18:29:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 73573 00:30:44.211 18:29:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 73573 ']' 00:30:44.211 18:29:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:44.211 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:44.211 18:29:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:44.211 18:29:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:44.211 18:29:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:44.211 18:29:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:44.211 [2024-12-06 18:29:15.113194] Starting SPDK v25.01-pre git sha1 b6a18b192 / DPDK 24.03.0 initialization... 
00:30:44.211 [2024-12-06 18:29:15.113323] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:44.470 [2024-12-06 18:29:15.288073] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:44.470 [2024-12-06 18:29:15.400442] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:44.767 [2024-12-06 18:29:15.611501] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:30:44.767 [2024-12-06 18:29:15.611546] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:30:45.026 18:29:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:45.026 18:29:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:30:45.026 18:29:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:30:45.026 18:29:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:45.026 18:29:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:45.026 [2024-12-06 18:29:15.952334] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:30:45.026 [2024-12-06 18:29:15.952530] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:30:45.026 [2024-12-06 18:29:15.952554] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:30:45.026 [2024-12-06 18:29:15.952569] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:30:45.026 [2024-12-06 18:29:15.952577] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev 
with name: BaseBdev3 00:30:45.026 [2024-12-06 18:29:15.952589] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:30:45.026 [2024-12-06 18:29:15.952596] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:30:45.026 [2024-12-06 18:29:15.952608] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:30:45.026 18:29:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:45.026 18:29:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:30:45.026 18:29:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:30:45.026 18:29:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:30:45.026 18:29:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:30:45.026 18:29:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:30:45.026 18:29:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:30:45.026 18:29:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:30:45.026 18:29:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:30:45.026 18:29:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:30:45.026 18:29:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:30:45.026 18:29:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:30:45.026 18:29:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:45.026 18:29:15 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:45.026 18:29:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:45.285 18:29:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:45.285 18:29:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:30:45.285 "name": "Existed_Raid", 00:30:45.285 "uuid": "e386f652-bf6c-4292-a77f-46b7e4154778", 00:30:45.285 "strip_size_kb": 0, 00:30:45.285 "state": "configuring", 00:30:45.285 "raid_level": "raid1", 00:30:45.285 "superblock": true, 00:30:45.285 "num_base_bdevs": 4, 00:30:45.285 "num_base_bdevs_discovered": 0, 00:30:45.285 "num_base_bdevs_operational": 4, 00:30:45.285 "base_bdevs_list": [ 00:30:45.285 { 00:30:45.285 "name": "BaseBdev1", 00:30:45.286 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:45.286 "is_configured": false, 00:30:45.286 "data_offset": 0, 00:30:45.286 "data_size": 0 00:30:45.286 }, 00:30:45.286 { 00:30:45.286 "name": "BaseBdev2", 00:30:45.286 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:45.286 "is_configured": false, 00:30:45.286 "data_offset": 0, 00:30:45.286 "data_size": 0 00:30:45.286 }, 00:30:45.286 { 00:30:45.286 "name": "BaseBdev3", 00:30:45.286 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:45.286 "is_configured": false, 00:30:45.286 "data_offset": 0, 00:30:45.286 "data_size": 0 00:30:45.286 }, 00:30:45.286 { 00:30:45.286 "name": "BaseBdev4", 00:30:45.286 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:45.286 "is_configured": false, 00:30:45.286 "data_offset": 0, 00:30:45.286 "data_size": 0 00:30:45.286 } 00:30:45.286 ] 00:30:45.286 }' 00:30:45.286 18:29:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:30:45.286 18:29:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:45.545 18:29:16 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:30:45.545 18:29:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:45.545 18:29:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:45.545 [2024-12-06 18:29:16.363711] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:30:45.545 [2024-12-06 18:29:16.363753] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:30:45.545 18:29:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:45.545 18:29:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:30:45.545 18:29:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:45.545 18:29:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:45.545 [2024-12-06 18:29:16.375688] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:30:45.545 [2024-12-06 18:29:16.375737] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:30:45.545 [2024-12-06 18:29:16.375747] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:30:45.546 [2024-12-06 18:29:16.375760] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:30:45.546 [2024-12-06 18:29:16.375768] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:30:45.546 [2024-12-06 18:29:16.375780] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:30:45.546 [2024-12-06 18:29:16.375787] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 
00:30:45.546 [2024-12-06 18:29:16.375799] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:30:45.546 18:29:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:45.546 18:29:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:30:45.546 18:29:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:45.546 18:29:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:45.546 [2024-12-06 18:29:16.425621] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:30:45.546 BaseBdev1 00:30:45.546 18:29:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:45.546 18:29:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:30:45.546 18:29:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:30:45.546 18:29:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:30:45.546 18:29:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:30:45.546 18:29:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:30:45.546 18:29:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:30:45.546 18:29:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:30:45.546 18:29:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:45.546 18:29:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:45.546 18:29:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:30:45.546 18:29:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:30:45.546 18:29:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:45.546 18:29:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:45.546 [ 00:30:45.546 { 00:30:45.546 "name": "BaseBdev1", 00:30:45.546 "aliases": [ 00:30:45.546 "44a4d0f8-160e-4897-9822-a14c04f5843c" 00:30:45.546 ], 00:30:45.546 "product_name": "Malloc disk", 00:30:45.546 "block_size": 512, 00:30:45.546 "num_blocks": 65536, 00:30:45.546 "uuid": "44a4d0f8-160e-4897-9822-a14c04f5843c", 00:30:45.546 "assigned_rate_limits": { 00:30:45.546 "rw_ios_per_sec": 0, 00:30:45.546 "rw_mbytes_per_sec": 0, 00:30:45.546 "r_mbytes_per_sec": 0, 00:30:45.546 "w_mbytes_per_sec": 0 00:30:45.546 }, 00:30:45.546 "claimed": true, 00:30:45.546 "claim_type": "exclusive_write", 00:30:45.546 "zoned": false, 00:30:45.546 "supported_io_types": { 00:30:45.546 "read": true, 00:30:45.546 "write": true, 00:30:45.546 "unmap": true, 00:30:45.546 "flush": true, 00:30:45.546 "reset": true, 00:30:45.546 "nvme_admin": false, 00:30:45.546 "nvme_io": false, 00:30:45.546 "nvme_io_md": false, 00:30:45.546 "write_zeroes": true, 00:30:45.546 "zcopy": true, 00:30:45.546 "get_zone_info": false, 00:30:45.546 "zone_management": false, 00:30:45.546 "zone_append": false, 00:30:45.546 "compare": false, 00:30:45.546 "compare_and_write": false, 00:30:45.546 "abort": true, 00:30:45.546 "seek_hole": false, 00:30:45.546 "seek_data": false, 00:30:45.546 "copy": true, 00:30:45.546 "nvme_iov_md": false 00:30:45.546 }, 00:30:45.546 "memory_domains": [ 00:30:45.546 { 00:30:45.546 "dma_device_id": "system", 00:30:45.546 "dma_device_type": 1 00:30:45.546 }, 00:30:45.546 { 00:30:45.546 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:45.546 "dma_device_type": 2 00:30:45.546 } 00:30:45.546 ], 00:30:45.546 "driver_specific": {} 
00:30:45.546 } 00:30:45.546 ] 00:30:45.546 18:29:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:45.546 18:29:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:30:45.546 18:29:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:30:45.546 18:29:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:30:45.546 18:29:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:30:45.546 18:29:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:30:45.546 18:29:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:30:45.546 18:29:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:30:45.546 18:29:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:30:45.546 18:29:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:30:45.546 18:29:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:30:45.546 18:29:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:30:45.546 18:29:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:45.546 18:29:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:45.546 18:29:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:45.546 18:29:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:30:45.806 18:29:16 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:45.806 18:29:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:30:45.806 "name": "Existed_Raid", 00:30:45.806 "uuid": "41f17c64-e29f-469f-a6c1-031b950e7d8f", 00:30:45.806 "strip_size_kb": 0, 00:30:45.806 "state": "configuring", 00:30:45.806 "raid_level": "raid1", 00:30:45.806 "superblock": true, 00:30:45.806 "num_base_bdevs": 4, 00:30:45.806 "num_base_bdevs_discovered": 1, 00:30:45.806 "num_base_bdevs_operational": 4, 00:30:45.806 "base_bdevs_list": [ 00:30:45.806 { 00:30:45.806 "name": "BaseBdev1", 00:30:45.806 "uuid": "44a4d0f8-160e-4897-9822-a14c04f5843c", 00:30:45.806 "is_configured": true, 00:30:45.806 "data_offset": 2048, 00:30:45.806 "data_size": 63488 00:30:45.806 }, 00:30:45.806 { 00:30:45.806 "name": "BaseBdev2", 00:30:45.806 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:45.806 "is_configured": false, 00:30:45.806 "data_offset": 0, 00:30:45.806 "data_size": 0 00:30:45.806 }, 00:30:45.806 { 00:30:45.806 "name": "BaseBdev3", 00:30:45.806 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:45.806 "is_configured": false, 00:30:45.806 "data_offset": 0, 00:30:45.806 "data_size": 0 00:30:45.806 }, 00:30:45.806 { 00:30:45.806 "name": "BaseBdev4", 00:30:45.806 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:45.806 "is_configured": false, 00:30:45.806 "data_offset": 0, 00:30:45.806 "data_size": 0 00:30:45.806 } 00:30:45.806 ] 00:30:45.806 }' 00:30:45.806 18:29:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:30:45.806 18:29:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:46.066 18:29:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:30:46.066 18:29:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:46.066 18:29:16 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:30:46.066 [2024-12-06 18:29:16.917027] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:30:46.066 [2024-12-06 18:29:16.917229] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:30:46.066 18:29:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:46.066 18:29:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:30:46.066 18:29:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:46.066 18:29:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:46.066 [2024-12-06 18:29:16.929063] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:30:46.066 [2024-12-06 18:29:16.931275] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:30:46.066 [2024-12-06 18:29:16.931321] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:30:46.066 [2024-12-06 18:29:16.931333] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:30:46.066 [2024-12-06 18:29:16.931348] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:30:46.066 [2024-12-06 18:29:16.931356] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:30:46.066 [2024-12-06 18:29:16.931367] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:30:46.066 18:29:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:46.066 18:29:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:30:46.066 18:29:16 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:30:46.066 18:29:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:30:46.066 18:29:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:30:46.066 18:29:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:30:46.066 18:29:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:30:46.066 18:29:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:30:46.066 18:29:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:30:46.066 18:29:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:30:46.066 18:29:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:30:46.066 18:29:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:30:46.066 18:29:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:30:46.066 18:29:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:46.066 18:29:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:30:46.066 18:29:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:46.066 18:29:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:46.066 18:29:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:46.067 18:29:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:30:46.067 "name": 
"Existed_Raid", 00:30:46.067 "uuid": "e07b3092-3d1f-4792-84c7-456c90686c07", 00:30:46.067 "strip_size_kb": 0, 00:30:46.067 "state": "configuring", 00:30:46.067 "raid_level": "raid1", 00:30:46.067 "superblock": true, 00:30:46.067 "num_base_bdevs": 4, 00:30:46.067 "num_base_bdevs_discovered": 1, 00:30:46.067 "num_base_bdevs_operational": 4, 00:30:46.067 "base_bdevs_list": [ 00:30:46.067 { 00:30:46.067 "name": "BaseBdev1", 00:30:46.067 "uuid": "44a4d0f8-160e-4897-9822-a14c04f5843c", 00:30:46.067 "is_configured": true, 00:30:46.067 "data_offset": 2048, 00:30:46.067 "data_size": 63488 00:30:46.067 }, 00:30:46.067 { 00:30:46.067 "name": "BaseBdev2", 00:30:46.067 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:46.067 "is_configured": false, 00:30:46.067 "data_offset": 0, 00:30:46.067 "data_size": 0 00:30:46.067 }, 00:30:46.067 { 00:30:46.067 "name": "BaseBdev3", 00:30:46.067 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:46.067 "is_configured": false, 00:30:46.067 "data_offset": 0, 00:30:46.067 "data_size": 0 00:30:46.067 }, 00:30:46.067 { 00:30:46.067 "name": "BaseBdev4", 00:30:46.067 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:46.067 "is_configured": false, 00:30:46.067 "data_offset": 0, 00:30:46.067 "data_size": 0 00:30:46.067 } 00:30:46.067 ] 00:30:46.067 }' 00:30:46.067 18:29:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:30:46.067 18:29:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:46.637 18:29:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:30:46.637 18:29:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:46.637 18:29:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:46.637 [2024-12-06 18:29:17.424082] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:30:46.637 
BaseBdev2 00:30:46.637 18:29:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:46.637 18:29:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:30:46.637 18:29:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:30:46.637 18:29:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:30:46.637 18:29:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:30:46.637 18:29:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:30:46.637 18:29:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:30:46.637 18:29:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:30:46.637 18:29:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:46.637 18:29:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:46.637 18:29:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:46.637 18:29:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:30:46.637 18:29:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:46.637 18:29:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:46.637 [ 00:30:46.637 { 00:30:46.637 "name": "BaseBdev2", 00:30:46.637 "aliases": [ 00:30:46.637 "36b5db82-511c-4479-87d1-a50fcb801d6e" 00:30:46.637 ], 00:30:46.637 "product_name": "Malloc disk", 00:30:46.637 "block_size": 512, 00:30:46.637 "num_blocks": 65536, 00:30:46.637 "uuid": "36b5db82-511c-4479-87d1-a50fcb801d6e", 00:30:46.637 "assigned_rate_limits": { 
00:30:46.637 "rw_ios_per_sec": 0, 00:30:46.637 "rw_mbytes_per_sec": 0, 00:30:46.637 "r_mbytes_per_sec": 0, 00:30:46.637 "w_mbytes_per_sec": 0 00:30:46.637 }, 00:30:46.637 "claimed": true, 00:30:46.637 "claim_type": "exclusive_write", 00:30:46.637 "zoned": false, 00:30:46.637 "supported_io_types": { 00:30:46.637 "read": true, 00:30:46.637 "write": true, 00:30:46.637 "unmap": true, 00:30:46.637 "flush": true, 00:30:46.637 "reset": true, 00:30:46.637 "nvme_admin": false, 00:30:46.637 "nvme_io": false, 00:30:46.637 "nvme_io_md": false, 00:30:46.637 "write_zeroes": true, 00:30:46.637 "zcopy": true, 00:30:46.637 "get_zone_info": false, 00:30:46.637 "zone_management": false, 00:30:46.637 "zone_append": false, 00:30:46.637 "compare": false, 00:30:46.637 "compare_and_write": false, 00:30:46.637 "abort": true, 00:30:46.637 "seek_hole": false, 00:30:46.637 "seek_data": false, 00:30:46.637 "copy": true, 00:30:46.637 "nvme_iov_md": false 00:30:46.637 }, 00:30:46.637 "memory_domains": [ 00:30:46.637 { 00:30:46.637 "dma_device_id": "system", 00:30:46.637 "dma_device_type": 1 00:30:46.637 }, 00:30:46.637 { 00:30:46.637 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:46.637 "dma_device_type": 2 00:30:46.637 } 00:30:46.637 ], 00:30:46.637 "driver_specific": {} 00:30:46.637 } 00:30:46.637 ] 00:30:46.637 18:29:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:46.637 18:29:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:30:46.637 18:29:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:30:46.637 18:29:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:30:46.637 18:29:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:30:46.637 18:29:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 
00:30:46.637 18:29:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:30:46.637 18:29:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:30:46.637 18:29:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:30:46.637 18:29:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:30:46.637 18:29:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:30:46.637 18:29:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:30:46.637 18:29:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:30:46.637 18:29:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:30:46.637 18:29:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:46.637 18:29:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:46.637 18:29:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:30:46.637 18:29:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:46.637 18:29:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:46.637 18:29:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:30:46.637 "name": "Existed_Raid", 00:30:46.637 "uuid": "e07b3092-3d1f-4792-84c7-456c90686c07", 00:30:46.637 "strip_size_kb": 0, 00:30:46.637 "state": "configuring", 00:30:46.637 "raid_level": "raid1", 00:30:46.637 "superblock": true, 00:30:46.637 "num_base_bdevs": 4, 00:30:46.637 "num_base_bdevs_discovered": 2, 00:30:46.637 "num_base_bdevs_operational": 4, 00:30:46.637 
"base_bdevs_list": [ 00:30:46.637 { 00:30:46.637 "name": "BaseBdev1", 00:30:46.637 "uuid": "44a4d0f8-160e-4897-9822-a14c04f5843c", 00:30:46.637 "is_configured": true, 00:30:46.637 "data_offset": 2048, 00:30:46.637 "data_size": 63488 00:30:46.637 }, 00:30:46.637 { 00:30:46.637 "name": "BaseBdev2", 00:30:46.637 "uuid": "36b5db82-511c-4479-87d1-a50fcb801d6e", 00:30:46.638 "is_configured": true, 00:30:46.638 "data_offset": 2048, 00:30:46.638 "data_size": 63488 00:30:46.638 }, 00:30:46.638 { 00:30:46.638 "name": "BaseBdev3", 00:30:46.638 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:46.638 "is_configured": false, 00:30:46.638 "data_offset": 0, 00:30:46.638 "data_size": 0 00:30:46.638 }, 00:30:46.638 { 00:30:46.638 "name": "BaseBdev4", 00:30:46.638 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:46.638 "is_configured": false, 00:30:46.638 "data_offset": 0, 00:30:46.638 "data_size": 0 00:30:46.638 } 00:30:46.638 ] 00:30:46.638 }' 00:30:46.638 18:29:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:30:46.638 18:29:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:47.207 18:29:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:30:47.207 18:29:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:47.207 18:29:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:47.208 [2024-12-06 18:29:17.947721] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:30:47.208 BaseBdev3 00:30:47.208 18:29:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:47.208 18:29:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:30:47.208 18:29:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local 
bdev_name=BaseBdev3 00:30:47.208 18:29:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:30:47.208 18:29:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:30:47.208 18:29:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:30:47.208 18:29:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:30:47.208 18:29:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:30:47.208 18:29:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:47.208 18:29:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:47.208 18:29:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:47.208 18:29:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:30:47.208 18:29:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:47.208 18:29:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:47.208 [ 00:30:47.208 { 00:30:47.208 "name": "BaseBdev3", 00:30:47.208 "aliases": [ 00:30:47.208 "c2b38cc3-2338-4b82-b47a-e8405386860e" 00:30:47.208 ], 00:30:47.208 "product_name": "Malloc disk", 00:30:47.208 "block_size": 512, 00:30:47.208 "num_blocks": 65536, 00:30:47.208 "uuid": "c2b38cc3-2338-4b82-b47a-e8405386860e", 00:30:47.208 "assigned_rate_limits": { 00:30:47.208 "rw_ios_per_sec": 0, 00:30:47.208 "rw_mbytes_per_sec": 0, 00:30:47.208 "r_mbytes_per_sec": 0, 00:30:47.208 "w_mbytes_per_sec": 0 00:30:47.208 }, 00:30:47.208 "claimed": true, 00:30:47.208 "claim_type": "exclusive_write", 00:30:47.208 "zoned": false, 00:30:47.208 "supported_io_types": { 00:30:47.208 "read": true, 00:30:47.208 
"write": true, 00:30:47.208 "unmap": true, 00:30:47.208 "flush": true, 00:30:47.208 "reset": true, 00:30:47.208 "nvme_admin": false, 00:30:47.208 "nvme_io": false, 00:30:47.208 "nvme_io_md": false, 00:30:47.208 "write_zeroes": true, 00:30:47.208 "zcopy": true, 00:30:47.208 "get_zone_info": false, 00:30:47.208 "zone_management": false, 00:30:47.208 "zone_append": false, 00:30:47.208 "compare": false, 00:30:47.208 "compare_and_write": false, 00:30:47.208 "abort": true, 00:30:47.208 "seek_hole": false, 00:30:47.208 "seek_data": false, 00:30:47.208 "copy": true, 00:30:47.208 "nvme_iov_md": false 00:30:47.208 }, 00:30:47.208 "memory_domains": [ 00:30:47.208 { 00:30:47.208 "dma_device_id": "system", 00:30:47.208 "dma_device_type": 1 00:30:47.208 }, 00:30:47.208 { 00:30:47.208 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:47.208 "dma_device_type": 2 00:30:47.208 } 00:30:47.208 ], 00:30:47.208 "driver_specific": {} 00:30:47.208 } 00:30:47.208 ] 00:30:47.208 18:29:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:47.208 18:29:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:30:47.208 18:29:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:30:47.208 18:29:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:30:47.208 18:29:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:30:47.208 18:29:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:30:47.208 18:29:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:30:47.208 18:29:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:30:47.208 18:29:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local 
strip_size=0 00:30:47.208 18:29:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:30:47.208 18:29:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:30:47.208 18:29:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:30:47.208 18:29:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:30:47.208 18:29:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:30:47.208 18:29:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:47.208 18:29:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:47.208 18:29:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:30:47.208 18:29:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:47.208 18:29:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:47.208 18:29:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:30:47.208 "name": "Existed_Raid", 00:30:47.208 "uuid": "e07b3092-3d1f-4792-84c7-456c90686c07", 00:30:47.208 "strip_size_kb": 0, 00:30:47.208 "state": "configuring", 00:30:47.208 "raid_level": "raid1", 00:30:47.208 "superblock": true, 00:30:47.208 "num_base_bdevs": 4, 00:30:47.208 "num_base_bdevs_discovered": 3, 00:30:47.208 "num_base_bdevs_operational": 4, 00:30:47.208 "base_bdevs_list": [ 00:30:47.208 { 00:30:47.208 "name": "BaseBdev1", 00:30:47.208 "uuid": "44a4d0f8-160e-4897-9822-a14c04f5843c", 00:30:47.208 "is_configured": true, 00:30:47.208 "data_offset": 2048, 00:30:47.208 "data_size": 63488 00:30:47.208 }, 00:30:47.208 { 00:30:47.208 "name": "BaseBdev2", 00:30:47.208 "uuid": 
"36b5db82-511c-4479-87d1-a50fcb801d6e", 00:30:47.208 "is_configured": true, 00:30:47.208 "data_offset": 2048, 00:30:47.208 "data_size": 63488 00:30:47.208 }, 00:30:47.208 { 00:30:47.208 "name": "BaseBdev3", 00:30:47.208 "uuid": "c2b38cc3-2338-4b82-b47a-e8405386860e", 00:30:47.208 "is_configured": true, 00:30:47.208 "data_offset": 2048, 00:30:47.208 "data_size": 63488 00:30:47.208 }, 00:30:47.208 { 00:30:47.208 "name": "BaseBdev4", 00:30:47.208 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:47.208 "is_configured": false, 00:30:47.208 "data_offset": 0, 00:30:47.208 "data_size": 0 00:30:47.208 } 00:30:47.208 ] 00:30:47.208 }' 00:30:47.208 18:29:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:30:47.208 18:29:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:47.467 18:29:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:30:47.468 18:29:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:47.468 18:29:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:47.728 [2024-12-06 18:29:18.430527] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:30:47.728 [2024-12-06 18:29:18.430794] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:30:47.728 [2024-12-06 18:29:18.430814] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:30:47.728 [2024-12-06 18:29:18.431097] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:30:47.728 BaseBdev4 00:30:47.728 [2024-12-06 18:29:18.431285] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:30:47.728 [2024-12-06 18:29:18.431301] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, 
raid_bdev 0x617000007e80 00:30:47.728 [2024-12-06 18:29:18.431449] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:30:47.728 18:29:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:47.728 18:29:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:30:47.728 18:29:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:30:47.728 18:29:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:30:47.728 18:29:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:30:47.728 18:29:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:30:47.728 18:29:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:30:47.728 18:29:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:30:47.728 18:29:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:47.728 18:29:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:47.728 18:29:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:47.728 18:29:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:30:47.728 18:29:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:47.728 18:29:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:47.728 [ 00:30:47.728 { 00:30:47.728 "name": "BaseBdev4", 00:30:47.728 "aliases": [ 00:30:47.728 "d4722d96-fdfe-4b23-931e-4f8d090ed2fa" 00:30:47.728 ], 00:30:47.728 "product_name": "Malloc disk", 00:30:47.728 "block_size": 512, 00:30:47.728 
"num_blocks": 65536, 00:30:47.728 "uuid": "d4722d96-fdfe-4b23-931e-4f8d090ed2fa", 00:30:47.728 "assigned_rate_limits": { 00:30:47.728 "rw_ios_per_sec": 0, 00:30:47.728 "rw_mbytes_per_sec": 0, 00:30:47.728 "r_mbytes_per_sec": 0, 00:30:47.728 "w_mbytes_per_sec": 0 00:30:47.728 }, 00:30:47.728 "claimed": true, 00:30:47.728 "claim_type": "exclusive_write", 00:30:47.728 "zoned": false, 00:30:47.728 "supported_io_types": { 00:30:47.728 "read": true, 00:30:47.728 "write": true, 00:30:47.728 "unmap": true, 00:30:47.728 "flush": true, 00:30:47.728 "reset": true, 00:30:47.728 "nvme_admin": false, 00:30:47.728 "nvme_io": false, 00:30:47.728 "nvme_io_md": false, 00:30:47.728 "write_zeroes": true, 00:30:47.728 "zcopy": true, 00:30:47.728 "get_zone_info": false, 00:30:47.728 "zone_management": false, 00:30:47.728 "zone_append": false, 00:30:47.728 "compare": false, 00:30:47.728 "compare_and_write": false, 00:30:47.728 "abort": true, 00:30:47.728 "seek_hole": false, 00:30:47.728 "seek_data": false, 00:30:47.728 "copy": true, 00:30:47.728 "nvme_iov_md": false 00:30:47.728 }, 00:30:47.728 "memory_domains": [ 00:30:47.728 { 00:30:47.728 "dma_device_id": "system", 00:30:47.728 "dma_device_type": 1 00:30:47.728 }, 00:30:47.728 { 00:30:47.728 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:47.728 "dma_device_type": 2 00:30:47.728 } 00:30:47.728 ], 00:30:47.728 "driver_specific": {} 00:30:47.728 } 00:30:47.728 ] 00:30:47.728 18:29:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:47.728 18:29:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:30:47.728 18:29:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:30:47.728 18:29:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:30:47.728 18:29:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 
00:30:47.728 18:29:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:30:47.728 18:29:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:30:47.728 18:29:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:30:47.728 18:29:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:30:47.728 18:29:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:30:47.728 18:29:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:30:47.728 18:29:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:30:47.728 18:29:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:30:47.728 18:29:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:30:47.728 18:29:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:47.728 18:29:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:30:47.728 18:29:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:47.728 18:29:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:47.728 18:29:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:47.728 18:29:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:30:47.728 "name": "Existed_Raid", 00:30:47.728 "uuid": "e07b3092-3d1f-4792-84c7-456c90686c07", 00:30:47.728 "strip_size_kb": 0, 00:30:47.728 "state": "online", 00:30:47.728 "raid_level": "raid1", 00:30:47.728 "superblock": true, 00:30:47.728 "num_base_bdevs": 4, 
00:30:47.728 "num_base_bdevs_discovered": 4, 00:30:47.728 "num_base_bdevs_operational": 4, 00:30:47.728 "base_bdevs_list": [ 00:30:47.728 { 00:30:47.728 "name": "BaseBdev1", 00:30:47.728 "uuid": "44a4d0f8-160e-4897-9822-a14c04f5843c", 00:30:47.728 "is_configured": true, 00:30:47.728 "data_offset": 2048, 00:30:47.728 "data_size": 63488 00:30:47.728 }, 00:30:47.728 { 00:30:47.728 "name": "BaseBdev2", 00:30:47.728 "uuid": "36b5db82-511c-4479-87d1-a50fcb801d6e", 00:30:47.728 "is_configured": true, 00:30:47.728 "data_offset": 2048, 00:30:47.728 "data_size": 63488 00:30:47.728 }, 00:30:47.728 { 00:30:47.728 "name": "BaseBdev3", 00:30:47.728 "uuid": "c2b38cc3-2338-4b82-b47a-e8405386860e", 00:30:47.728 "is_configured": true, 00:30:47.728 "data_offset": 2048, 00:30:47.728 "data_size": 63488 00:30:47.728 }, 00:30:47.728 { 00:30:47.728 "name": "BaseBdev4", 00:30:47.728 "uuid": "d4722d96-fdfe-4b23-931e-4f8d090ed2fa", 00:30:47.728 "is_configured": true, 00:30:47.728 "data_offset": 2048, 00:30:47.728 "data_size": 63488 00:30:47.728 } 00:30:47.728 ] 00:30:47.728 }' 00:30:47.728 18:29:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:30:47.728 18:29:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:47.988 18:29:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:30:47.988 18:29:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:30:47.988 18:29:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:30:47.988 18:29:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:30:47.988 18:29:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:30:47.988 18:29:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:30:47.988 
18:29:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:30:47.988 18:29:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:30:47.988 18:29:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:47.988 18:29:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:47.988 [2024-12-06 18:29:18.926285] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:30:48.248 18:29:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:48.248 18:29:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:30:48.248 "name": "Existed_Raid", 00:30:48.248 "aliases": [ 00:30:48.248 "e07b3092-3d1f-4792-84c7-456c90686c07" 00:30:48.248 ], 00:30:48.248 "product_name": "Raid Volume", 00:30:48.248 "block_size": 512, 00:30:48.248 "num_blocks": 63488, 00:30:48.248 "uuid": "e07b3092-3d1f-4792-84c7-456c90686c07", 00:30:48.248 "assigned_rate_limits": { 00:30:48.248 "rw_ios_per_sec": 0, 00:30:48.248 "rw_mbytes_per_sec": 0, 00:30:48.248 "r_mbytes_per_sec": 0, 00:30:48.248 "w_mbytes_per_sec": 0 00:30:48.248 }, 00:30:48.248 "claimed": false, 00:30:48.248 "zoned": false, 00:30:48.248 "supported_io_types": { 00:30:48.248 "read": true, 00:30:48.248 "write": true, 00:30:48.248 "unmap": false, 00:30:48.248 "flush": false, 00:30:48.248 "reset": true, 00:30:48.248 "nvme_admin": false, 00:30:48.248 "nvme_io": false, 00:30:48.248 "nvme_io_md": false, 00:30:48.248 "write_zeroes": true, 00:30:48.248 "zcopy": false, 00:30:48.248 "get_zone_info": false, 00:30:48.248 "zone_management": false, 00:30:48.248 "zone_append": false, 00:30:48.248 "compare": false, 00:30:48.248 "compare_and_write": false, 00:30:48.248 "abort": false, 00:30:48.248 "seek_hole": false, 00:30:48.248 "seek_data": false, 00:30:48.248 "copy": false, 00:30:48.248 
"nvme_iov_md": false 00:30:48.248 }, 00:30:48.248 "memory_domains": [ 00:30:48.248 { 00:30:48.248 "dma_device_id": "system", 00:30:48.248 "dma_device_type": 1 00:30:48.248 }, 00:30:48.248 { 00:30:48.248 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:48.248 "dma_device_type": 2 00:30:48.248 }, 00:30:48.248 { 00:30:48.248 "dma_device_id": "system", 00:30:48.248 "dma_device_type": 1 00:30:48.248 }, 00:30:48.248 { 00:30:48.248 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:48.248 "dma_device_type": 2 00:30:48.248 }, 00:30:48.248 { 00:30:48.248 "dma_device_id": "system", 00:30:48.248 "dma_device_type": 1 00:30:48.248 }, 00:30:48.248 { 00:30:48.248 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:48.248 "dma_device_type": 2 00:30:48.248 }, 00:30:48.248 { 00:30:48.248 "dma_device_id": "system", 00:30:48.248 "dma_device_type": 1 00:30:48.248 }, 00:30:48.248 { 00:30:48.248 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:48.248 "dma_device_type": 2 00:30:48.248 } 00:30:48.248 ], 00:30:48.248 "driver_specific": { 00:30:48.248 "raid": { 00:30:48.248 "uuid": "e07b3092-3d1f-4792-84c7-456c90686c07", 00:30:48.248 "strip_size_kb": 0, 00:30:48.248 "state": "online", 00:30:48.248 "raid_level": "raid1", 00:30:48.248 "superblock": true, 00:30:48.248 "num_base_bdevs": 4, 00:30:48.248 "num_base_bdevs_discovered": 4, 00:30:48.248 "num_base_bdevs_operational": 4, 00:30:48.248 "base_bdevs_list": [ 00:30:48.248 { 00:30:48.248 "name": "BaseBdev1", 00:30:48.248 "uuid": "44a4d0f8-160e-4897-9822-a14c04f5843c", 00:30:48.248 "is_configured": true, 00:30:48.248 "data_offset": 2048, 00:30:48.248 "data_size": 63488 00:30:48.248 }, 00:30:48.248 { 00:30:48.248 "name": "BaseBdev2", 00:30:48.248 "uuid": "36b5db82-511c-4479-87d1-a50fcb801d6e", 00:30:48.248 "is_configured": true, 00:30:48.248 "data_offset": 2048, 00:30:48.248 "data_size": 63488 00:30:48.248 }, 00:30:48.248 { 00:30:48.248 "name": "BaseBdev3", 00:30:48.248 "uuid": "c2b38cc3-2338-4b82-b47a-e8405386860e", 00:30:48.248 "is_configured": true, 
00:30:48.248 "data_offset": 2048, 00:30:48.248 "data_size": 63488 00:30:48.248 }, 00:30:48.249 { 00:30:48.249 "name": "BaseBdev4", 00:30:48.249 "uuid": "d4722d96-fdfe-4b23-931e-4f8d090ed2fa", 00:30:48.249 "is_configured": true, 00:30:48.249 "data_offset": 2048, 00:30:48.249 "data_size": 63488 00:30:48.249 } 00:30:48.249 ] 00:30:48.249 } 00:30:48.249 } 00:30:48.249 }' 00:30:48.249 18:29:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:30:48.249 18:29:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:30:48.249 BaseBdev2 00:30:48.249 BaseBdev3 00:30:48.249 BaseBdev4' 00:30:48.249 18:29:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:30:48.249 18:29:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:30:48.249 18:29:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:30:48.249 18:29:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:30:48.249 18:29:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:48.249 18:29:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:48.249 18:29:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:30:48.249 18:29:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:48.249 18:29:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:30:48.249 18:29:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:30:48.249 18:29:19 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:30:48.249 18:29:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:30:48.249 18:29:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:30:48.249 18:29:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:48.249 18:29:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:48.249 18:29:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:48.249 18:29:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:30:48.249 18:29:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:30:48.249 18:29:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:30:48.249 18:29:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:30:48.249 18:29:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:48.249 18:29:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:48.249 18:29:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:30:48.249 18:29:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:48.249 18:29:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:30:48.249 18:29:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:30:48.249 18:29:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # 
for name in $base_bdev_names 00:30:48.249 18:29:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:30:48.249 18:29:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:30:48.249 18:29:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:48.249 18:29:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:48.509 18:29:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:48.509 18:29:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:30:48.509 18:29:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:30:48.509 18:29:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:30:48.509 18:29:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:48.509 18:29:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:48.509 [2024-12-06 18:29:19.245653] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:30:48.509 18:29:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:48.509 18:29:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:30:48.509 18:29:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:30:48.509 18:29:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:30:48.509 18:29:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:30:48.509 18:29:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:30:48.509 18:29:19 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:30:48.509 18:29:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:30:48.509 18:29:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:30:48.509 18:29:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:30:48.509 18:29:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:30:48.509 18:29:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:30:48.510 18:29:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:30:48.510 18:29:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:30:48.510 18:29:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:30:48.510 18:29:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:30:48.510 18:29:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:48.510 18:29:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:30:48.510 18:29:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:48.510 18:29:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:48.510 18:29:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:48.510 18:29:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:30:48.510 "name": "Existed_Raid", 00:30:48.510 "uuid": "e07b3092-3d1f-4792-84c7-456c90686c07", 00:30:48.510 "strip_size_kb": 0, 00:30:48.510 
"state": "online", 00:30:48.510 "raid_level": "raid1", 00:30:48.510 "superblock": true, 00:30:48.510 "num_base_bdevs": 4, 00:30:48.510 "num_base_bdevs_discovered": 3, 00:30:48.510 "num_base_bdevs_operational": 3, 00:30:48.510 "base_bdevs_list": [ 00:30:48.510 { 00:30:48.510 "name": null, 00:30:48.510 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:48.510 "is_configured": false, 00:30:48.510 "data_offset": 0, 00:30:48.510 "data_size": 63488 00:30:48.510 }, 00:30:48.510 { 00:30:48.510 "name": "BaseBdev2", 00:30:48.510 "uuid": "36b5db82-511c-4479-87d1-a50fcb801d6e", 00:30:48.510 "is_configured": true, 00:30:48.510 "data_offset": 2048, 00:30:48.510 "data_size": 63488 00:30:48.510 }, 00:30:48.510 { 00:30:48.510 "name": "BaseBdev3", 00:30:48.510 "uuid": "c2b38cc3-2338-4b82-b47a-e8405386860e", 00:30:48.510 "is_configured": true, 00:30:48.510 "data_offset": 2048, 00:30:48.510 "data_size": 63488 00:30:48.510 }, 00:30:48.510 { 00:30:48.510 "name": "BaseBdev4", 00:30:48.510 "uuid": "d4722d96-fdfe-4b23-931e-4f8d090ed2fa", 00:30:48.510 "is_configured": true, 00:30:48.510 "data_offset": 2048, 00:30:48.510 "data_size": 63488 00:30:48.510 } 00:30:48.510 ] 00:30:48.510 }' 00:30:48.510 18:29:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:30:48.510 18:29:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:49.079 18:29:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:30:49.079 18:29:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:30:49.079 18:29:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:49.079 18:29:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:49.079 18:29:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:49.079 18:29:19 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:30:49.079 18:29:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:49.079 18:29:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:30:49.079 18:29:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:30:49.079 18:29:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:30:49.079 18:29:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:49.079 18:29:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:49.079 [2024-12-06 18:29:19.797891] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:30:49.079 18:29:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:49.079 18:29:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:30:49.079 18:29:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:30:49.079 18:29:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:49.079 18:29:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:30:49.079 18:29:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:49.079 18:29:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:49.079 18:29:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:49.079 18:29:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:30:49.079 18:29:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid 
'!=' Existed_Raid ']' 00:30:49.079 18:29:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:30:49.079 18:29:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:49.079 18:29:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:49.079 [2024-12-06 18:29:19.949316] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:30:49.338 18:29:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:49.338 18:29:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:30:49.338 18:29:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:30:49.338 18:29:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:49.338 18:29:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:30:49.338 18:29:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:49.338 18:29:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:49.338 18:29:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:49.338 18:29:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:30:49.338 18:29:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:30:49.338 18:29:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:30:49.338 18:29:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:49.338 18:29:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:49.338 [2024-12-06 18:29:20.101910] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:30:49.338 [2024-12-06 18:29:20.102009] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:30:49.338 [2024-12-06 18:29:20.199939] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:30:49.338 [2024-12-06 18:29:20.199995] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:30:49.338 [2024-12-06 18:29:20.200009] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:30:49.338 18:29:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:49.338 18:29:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:30:49.338 18:29:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:30:49.338 18:29:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:49.338 18:29:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:49.338 18:29:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:30:49.338 18:29:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:49.338 18:29:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:49.338 18:29:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:30:49.338 18:29:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:30:49.338 18:29:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:30:49.338 18:29:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:30:49.338 18:29:20 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:30:49.338 18:29:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:30:49.338 18:29:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:49.338 18:29:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:49.598 BaseBdev2 00:30:49.598 18:29:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:49.598 18:29:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:30:49.598 18:29:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:30:49.598 18:29:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:30:49.598 18:29:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:30:49.598 18:29:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:30:49.598 18:29:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:30:49.598 18:29:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:30:49.598 18:29:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:49.598 18:29:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:49.598 18:29:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:49.598 18:29:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:30:49.598 18:29:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:49.598 18:29:20 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@10 -- # set +x 00:30:49.598 [ 00:30:49.598 { 00:30:49.598 "name": "BaseBdev2", 00:30:49.598 "aliases": [ 00:30:49.598 "e3dcf711-5f70-4dea-90d3-51f229eaab82" 00:30:49.598 ], 00:30:49.598 "product_name": "Malloc disk", 00:30:49.598 "block_size": 512, 00:30:49.598 "num_blocks": 65536, 00:30:49.598 "uuid": "e3dcf711-5f70-4dea-90d3-51f229eaab82", 00:30:49.598 "assigned_rate_limits": { 00:30:49.598 "rw_ios_per_sec": 0, 00:30:49.598 "rw_mbytes_per_sec": 0, 00:30:49.598 "r_mbytes_per_sec": 0, 00:30:49.598 "w_mbytes_per_sec": 0 00:30:49.598 }, 00:30:49.598 "claimed": false, 00:30:49.598 "zoned": false, 00:30:49.598 "supported_io_types": { 00:30:49.598 "read": true, 00:30:49.598 "write": true, 00:30:49.598 "unmap": true, 00:30:49.598 "flush": true, 00:30:49.598 "reset": true, 00:30:49.598 "nvme_admin": false, 00:30:49.598 "nvme_io": false, 00:30:49.598 "nvme_io_md": false, 00:30:49.598 "write_zeroes": true, 00:30:49.598 "zcopy": true, 00:30:49.598 "get_zone_info": false, 00:30:49.598 "zone_management": false, 00:30:49.598 "zone_append": false, 00:30:49.598 "compare": false, 00:30:49.598 "compare_and_write": false, 00:30:49.598 "abort": true, 00:30:49.598 "seek_hole": false, 00:30:49.598 "seek_data": false, 00:30:49.598 "copy": true, 00:30:49.598 "nvme_iov_md": false 00:30:49.598 }, 00:30:49.598 "memory_domains": [ 00:30:49.598 { 00:30:49.598 "dma_device_id": "system", 00:30:49.598 "dma_device_type": 1 00:30:49.598 }, 00:30:49.598 { 00:30:49.598 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:49.598 "dma_device_type": 2 00:30:49.598 } 00:30:49.598 ], 00:30:49.598 "driver_specific": {} 00:30:49.598 } 00:30:49.598 ] 00:30:49.598 18:29:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:49.598 18:29:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:30:49.598 18:29:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:30:49.598 18:29:20 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:30:49.598 18:29:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:30:49.598 18:29:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:49.598 18:29:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:49.598 BaseBdev3 00:30:49.598 18:29:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:49.598 18:29:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:30:49.598 18:29:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:30:49.598 18:29:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:30:49.598 18:29:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:30:49.598 18:29:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:30:49.598 18:29:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:30:49.598 18:29:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:30:49.598 18:29:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:49.598 18:29:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:49.598 18:29:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:49.598 18:29:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:30:49.598 18:29:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:49.598 18:29:20 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:49.598 [ 00:30:49.598 { 00:30:49.598 "name": "BaseBdev3", 00:30:49.598 "aliases": [ 00:30:49.598 "3811f040-c879-4b1a-b47f-2e7fe103f92c" 00:30:49.598 ], 00:30:49.598 "product_name": "Malloc disk", 00:30:49.598 "block_size": 512, 00:30:49.598 "num_blocks": 65536, 00:30:49.598 "uuid": "3811f040-c879-4b1a-b47f-2e7fe103f92c", 00:30:49.598 "assigned_rate_limits": { 00:30:49.598 "rw_ios_per_sec": 0, 00:30:49.598 "rw_mbytes_per_sec": 0, 00:30:49.598 "r_mbytes_per_sec": 0, 00:30:49.598 "w_mbytes_per_sec": 0 00:30:49.598 }, 00:30:49.598 "claimed": false, 00:30:49.598 "zoned": false, 00:30:49.598 "supported_io_types": { 00:30:49.598 "read": true, 00:30:49.598 "write": true, 00:30:49.598 "unmap": true, 00:30:49.598 "flush": true, 00:30:49.598 "reset": true, 00:30:49.598 "nvme_admin": false, 00:30:49.598 "nvme_io": false, 00:30:49.598 "nvme_io_md": false, 00:30:49.598 "write_zeroes": true, 00:30:49.598 "zcopy": true, 00:30:49.598 "get_zone_info": false, 00:30:49.598 "zone_management": false, 00:30:49.598 "zone_append": false, 00:30:49.598 "compare": false, 00:30:49.598 "compare_and_write": false, 00:30:49.598 "abort": true, 00:30:49.598 "seek_hole": false, 00:30:49.598 "seek_data": false, 00:30:49.598 "copy": true, 00:30:49.598 "nvme_iov_md": false 00:30:49.598 }, 00:30:49.598 "memory_domains": [ 00:30:49.598 { 00:30:49.598 "dma_device_id": "system", 00:30:49.598 "dma_device_type": 1 00:30:49.598 }, 00:30:49.598 { 00:30:49.598 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:49.598 "dma_device_type": 2 00:30:49.598 } 00:30:49.598 ], 00:30:49.599 "driver_specific": {} 00:30:49.599 } 00:30:49.599 ] 00:30:49.599 18:29:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:49.599 18:29:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:30:49.599 18:29:20 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@286 -- # (( i++ )) 00:30:49.599 18:29:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:30:49.599 18:29:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:30:49.599 18:29:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:49.599 18:29:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:49.599 BaseBdev4 00:30:49.599 18:29:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:49.599 18:29:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:30:49.599 18:29:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:30:49.599 18:29:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:30:49.599 18:29:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:30:49.599 18:29:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:30:49.599 18:29:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:30:49.599 18:29:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:30:49.599 18:29:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:49.599 18:29:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:49.599 18:29:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:49.599 18:29:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:30:49.599 18:29:20 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:30:49.599 18:29:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:49.599 [ 00:30:49.599 { 00:30:49.599 "name": "BaseBdev4", 00:30:49.599 "aliases": [ 00:30:49.599 "4201a812-7870-4139-a968-e1826ed5c0a8" 00:30:49.599 ], 00:30:49.599 "product_name": "Malloc disk", 00:30:49.599 "block_size": 512, 00:30:49.599 "num_blocks": 65536, 00:30:49.599 "uuid": "4201a812-7870-4139-a968-e1826ed5c0a8", 00:30:49.599 "assigned_rate_limits": { 00:30:49.599 "rw_ios_per_sec": 0, 00:30:49.599 "rw_mbytes_per_sec": 0, 00:30:49.599 "r_mbytes_per_sec": 0, 00:30:49.599 "w_mbytes_per_sec": 0 00:30:49.599 }, 00:30:49.599 "claimed": false, 00:30:49.599 "zoned": false, 00:30:49.599 "supported_io_types": { 00:30:49.599 "read": true, 00:30:49.599 "write": true, 00:30:49.599 "unmap": true, 00:30:49.599 "flush": true, 00:30:49.599 "reset": true, 00:30:49.599 "nvme_admin": false, 00:30:49.599 "nvme_io": false, 00:30:49.599 "nvme_io_md": false, 00:30:49.599 "write_zeroes": true, 00:30:49.599 "zcopy": true, 00:30:49.599 "get_zone_info": false, 00:30:49.599 "zone_management": false, 00:30:49.599 "zone_append": false, 00:30:49.599 "compare": false, 00:30:49.599 "compare_and_write": false, 00:30:49.599 "abort": true, 00:30:49.599 "seek_hole": false, 00:30:49.599 "seek_data": false, 00:30:49.599 "copy": true, 00:30:49.599 "nvme_iov_md": false 00:30:49.599 }, 00:30:49.599 "memory_domains": [ 00:30:49.599 { 00:30:49.599 "dma_device_id": "system", 00:30:49.599 "dma_device_type": 1 00:30:49.599 }, 00:30:49.599 { 00:30:49.599 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:49.599 "dma_device_type": 2 00:30:49.599 } 00:30:49.599 ], 00:30:49.599 "driver_specific": {} 00:30:49.599 } 00:30:49.599 ] 00:30:49.599 18:29:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:49.599 18:29:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 
00:30:49.599 18:29:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:30:49.599 18:29:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:30:49.599 18:29:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:30:49.599 18:29:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:49.599 18:29:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:49.599 [2024-12-06 18:29:20.520945] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:30:49.599 [2024-12-06 18:29:20.521000] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:30:49.599 [2024-12-06 18:29:20.521020] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:30:49.599 [2024-12-06 18:29:20.523296] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:30:49.599 [2024-12-06 18:29:20.523345] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:30:49.599 18:29:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:49.599 18:29:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:30:49.599 18:29:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:30:49.599 18:29:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:30:49.599 18:29:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:30:49.599 18:29:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
00:30:49.599 18:29:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:30:49.599 18:29:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:30:49.599 18:29:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:30:49.599 18:29:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:30:49.599 18:29:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:30:49.599 18:29:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:49.599 18:29:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:49.599 18:29:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:30:49.599 18:29:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:49.859 18:29:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:49.859 18:29:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:30:49.859 "name": "Existed_Raid", 00:30:49.859 "uuid": "38fc3edf-b95a-4908-b3fb-2ceefdb4542e", 00:30:49.859 "strip_size_kb": 0, 00:30:49.859 "state": "configuring", 00:30:49.859 "raid_level": "raid1", 00:30:49.859 "superblock": true, 00:30:49.859 "num_base_bdevs": 4, 00:30:49.859 "num_base_bdevs_discovered": 3, 00:30:49.859 "num_base_bdevs_operational": 4, 00:30:49.859 "base_bdevs_list": [ 00:30:49.859 { 00:30:49.859 "name": "BaseBdev1", 00:30:49.859 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:49.859 "is_configured": false, 00:30:49.859 "data_offset": 0, 00:30:49.859 "data_size": 0 00:30:49.859 }, 00:30:49.859 { 00:30:49.859 "name": "BaseBdev2", 00:30:49.859 "uuid": "e3dcf711-5f70-4dea-90d3-51f229eaab82", 
00:30:49.859 "is_configured": true, 00:30:49.859 "data_offset": 2048, 00:30:49.859 "data_size": 63488 00:30:49.859 }, 00:30:49.859 { 00:30:49.859 "name": "BaseBdev3", 00:30:49.859 "uuid": "3811f040-c879-4b1a-b47f-2e7fe103f92c", 00:30:49.859 "is_configured": true, 00:30:49.859 "data_offset": 2048, 00:30:49.859 "data_size": 63488 00:30:49.859 }, 00:30:49.859 { 00:30:49.859 "name": "BaseBdev4", 00:30:49.859 "uuid": "4201a812-7870-4139-a968-e1826ed5c0a8", 00:30:49.859 "is_configured": true, 00:30:49.859 "data_offset": 2048, 00:30:49.859 "data_size": 63488 00:30:49.859 } 00:30:49.859 ] 00:30:49.859 }' 00:30:49.859 18:29:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:30:49.859 18:29:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:50.118 18:29:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:30:50.118 18:29:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:50.118 18:29:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:50.118 [2024-12-06 18:29:20.928385] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:30:50.118 18:29:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:50.118 18:29:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:30:50.118 18:29:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:30:50.118 18:29:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:30:50.118 18:29:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:30:50.118 18:29:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
00:30:50.118 18:29:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:30:50.118 18:29:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:30:50.118 18:29:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:30:50.118 18:29:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:30:50.118 18:29:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:30:50.118 18:29:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:50.118 18:29:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:30:50.118 18:29:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:50.118 18:29:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:50.118 18:29:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:50.118 18:29:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:30:50.118 "name": "Existed_Raid", 00:30:50.118 "uuid": "38fc3edf-b95a-4908-b3fb-2ceefdb4542e", 00:30:50.118 "strip_size_kb": 0, 00:30:50.118 "state": "configuring", 00:30:50.118 "raid_level": "raid1", 00:30:50.118 "superblock": true, 00:30:50.118 "num_base_bdevs": 4, 00:30:50.118 "num_base_bdevs_discovered": 2, 00:30:50.118 "num_base_bdevs_operational": 4, 00:30:50.118 "base_bdevs_list": [ 00:30:50.118 { 00:30:50.118 "name": "BaseBdev1", 00:30:50.118 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:50.118 "is_configured": false, 00:30:50.118 "data_offset": 0, 00:30:50.118 "data_size": 0 00:30:50.118 }, 00:30:50.118 { 00:30:50.118 "name": null, 00:30:50.118 "uuid": "e3dcf711-5f70-4dea-90d3-51f229eaab82", 00:30:50.118 
"is_configured": false, 00:30:50.118 "data_offset": 0, 00:30:50.118 "data_size": 63488 00:30:50.118 }, 00:30:50.118 { 00:30:50.118 "name": "BaseBdev3", 00:30:50.118 "uuid": "3811f040-c879-4b1a-b47f-2e7fe103f92c", 00:30:50.118 "is_configured": true, 00:30:50.118 "data_offset": 2048, 00:30:50.118 "data_size": 63488 00:30:50.118 }, 00:30:50.118 { 00:30:50.118 "name": "BaseBdev4", 00:30:50.118 "uuid": "4201a812-7870-4139-a968-e1826ed5c0a8", 00:30:50.118 "is_configured": true, 00:30:50.118 "data_offset": 2048, 00:30:50.118 "data_size": 63488 00:30:50.118 } 00:30:50.118 ] 00:30:50.118 }' 00:30:50.118 18:29:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:30:50.118 18:29:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:50.689 18:29:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:50.689 18:29:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:50.689 18:29:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:50.689 18:29:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:30:50.689 18:29:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:50.689 18:29:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:30:50.689 18:29:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:30:50.689 18:29:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:50.689 18:29:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:50.689 [2024-12-06 18:29:21.433892] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:30:50.689 BaseBdev1 
00:30:50.689 18:29:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:50.689 18:29:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:30:50.689 18:29:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:30:50.689 18:29:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:30:50.689 18:29:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:30:50.689 18:29:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:30:50.689 18:29:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:30:50.689 18:29:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:30:50.689 18:29:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:50.689 18:29:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:50.689 18:29:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:50.689 18:29:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:30:50.689 18:29:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:50.689 18:29:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:50.689 [ 00:30:50.689 { 00:30:50.689 "name": "BaseBdev1", 00:30:50.689 "aliases": [ 00:30:50.689 "85ea0930-efd8-4612-adc3-714b35301799" 00:30:50.689 ], 00:30:50.689 "product_name": "Malloc disk", 00:30:50.689 "block_size": 512, 00:30:50.689 "num_blocks": 65536, 00:30:50.690 "uuid": "85ea0930-efd8-4612-adc3-714b35301799", 00:30:50.690 "assigned_rate_limits": { 00:30:50.690 
"rw_ios_per_sec": 0, 00:30:50.690 "rw_mbytes_per_sec": 0, 00:30:50.690 "r_mbytes_per_sec": 0, 00:30:50.690 "w_mbytes_per_sec": 0 00:30:50.690 }, 00:30:50.690 "claimed": true, 00:30:50.690 "claim_type": "exclusive_write", 00:30:50.690 "zoned": false, 00:30:50.690 "supported_io_types": { 00:30:50.690 "read": true, 00:30:50.690 "write": true, 00:30:50.690 "unmap": true, 00:30:50.690 "flush": true, 00:30:50.690 "reset": true, 00:30:50.690 "nvme_admin": false, 00:30:50.690 "nvme_io": false, 00:30:50.690 "nvme_io_md": false, 00:30:50.690 "write_zeroes": true, 00:30:50.690 "zcopy": true, 00:30:50.690 "get_zone_info": false, 00:30:50.690 "zone_management": false, 00:30:50.690 "zone_append": false, 00:30:50.690 "compare": false, 00:30:50.690 "compare_and_write": false, 00:30:50.690 "abort": true, 00:30:50.690 "seek_hole": false, 00:30:50.690 "seek_data": false, 00:30:50.690 "copy": true, 00:30:50.690 "nvme_iov_md": false 00:30:50.690 }, 00:30:50.690 "memory_domains": [ 00:30:50.690 { 00:30:50.690 "dma_device_id": "system", 00:30:50.690 "dma_device_type": 1 00:30:50.690 }, 00:30:50.690 { 00:30:50.690 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:50.690 "dma_device_type": 2 00:30:50.690 } 00:30:50.690 ], 00:30:50.690 "driver_specific": {} 00:30:50.690 } 00:30:50.690 ] 00:30:50.690 18:29:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:50.690 18:29:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:30:50.690 18:29:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:30:50.690 18:29:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:30:50.690 18:29:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:30:50.690 18:29:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:30:50.690 18:29:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:30:50.690 18:29:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:30:50.690 18:29:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:30:50.690 18:29:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:30:50.690 18:29:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:30:50.690 18:29:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:30:50.690 18:29:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:50.690 18:29:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:30:50.690 18:29:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:50.690 18:29:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:50.690 18:29:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:50.690 18:29:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:30:50.690 "name": "Existed_Raid", 00:30:50.690 "uuid": "38fc3edf-b95a-4908-b3fb-2ceefdb4542e", 00:30:50.690 "strip_size_kb": 0, 00:30:50.690 "state": "configuring", 00:30:50.690 "raid_level": "raid1", 00:30:50.690 "superblock": true, 00:30:50.690 "num_base_bdevs": 4, 00:30:50.690 "num_base_bdevs_discovered": 3, 00:30:50.690 "num_base_bdevs_operational": 4, 00:30:50.690 "base_bdevs_list": [ 00:30:50.690 { 00:30:50.690 "name": "BaseBdev1", 00:30:50.690 "uuid": "85ea0930-efd8-4612-adc3-714b35301799", 00:30:50.690 "is_configured": true, 00:30:50.690 "data_offset": 2048, 00:30:50.690 "data_size": 63488 
00:30:50.690 }, 00:30:50.690 { 00:30:50.690 "name": null, 00:30:50.690 "uuid": "e3dcf711-5f70-4dea-90d3-51f229eaab82", 00:30:50.690 "is_configured": false, 00:30:50.690 "data_offset": 0, 00:30:50.690 "data_size": 63488 00:30:50.690 }, 00:30:50.690 { 00:30:50.690 "name": "BaseBdev3", 00:30:50.690 "uuid": "3811f040-c879-4b1a-b47f-2e7fe103f92c", 00:30:50.690 "is_configured": true, 00:30:50.690 "data_offset": 2048, 00:30:50.690 "data_size": 63488 00:30:50.690 }, 00:30:50.690 { 00:30:50.690 "name": "BaseBdev4", 00:30:50.690 "uuid": "4201a812-7870-4139-a968-e1826ed5c0a8", 00:30:50.690 "is_configured": true, 00:30:50.690 "data_offset": 2048, 00:30:50.690 "data_size": 63488 00:30:50.690 } 00:30:50.690 ] 00:30:50.690 }' 00:30:50.690 18:29:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:30:50.690 18:29:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:50.949 18:29:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:30:50.949 18:29:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:50.949 18:29:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:50.949 18:29:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:50.949 18:29:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:50.949 18:29:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:30:50.949 18:29:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:30:50.949 18:29:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:50.949 18:29:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:50.949 
[2024-12-06 18:29:21.881924] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:30:50.949 18:29:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:50.949 18:29:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:30:50.949 18:29:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:30:50.949 18:29:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:30:50.949 18:29:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:30:50.949 18:29:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:30:50.949 18:29:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:30:50.949 18:29:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:30:50.949 18:29:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:30:50.949 18:29:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:30:50.949 18:29:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:30:50.949 18:29:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:50.949 18:29:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:30:50.949 18:29:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:50.949 18:29:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:51.218 18:29:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:51.218 18:29:21 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:30:51.218 "name": "Existed_Raid", 00:30:51.218 "uuid": "38fc3edf-b95a-4908-b3fb-2ceefdb4542e", 00:30:51.218 "strip_size_kb": 0, 00:30:51.218 "state": "configuring", 00:30:51.218 "raid_level": "raid1", 00:30:51.218 "superblock": true, 00:30:51.218 "num_base_bdevs": 4, 00:30:51.218 "num_base_bdevs_discovered": 2, 00:30:51.218 "num_base_bdevs_operational": 4, 00:30:51.218 "base_bdevs_list": [ 00:30:51.218 { 00:30:51.218 "name": "BaseBdev1", 00:30:51.218 "uuid": "85ea0930-efd8-4612-adc3-714b35301799", 00:30:51.218 "is_configured": true, 00:30:51.218 "data_offset": 2048, 00:30:51.218 "data_size": 63488 00:30:51.218 }, 00:30:51.218 { 00:30:51.218 "name": null, 00:30:51.218 "uuid": "e3dcf711-5f70-4dea-90d3-51f229eaab82", 00:30:51.218 "is_configured": false, 00:30:51.218 "data_offset": 0, 00:30:51.218 "data_size": 63488 00:30:51.218 }, 00:30:51.218 { 00:30:51.218 "name": null, 00:30:51.218 "uuid": "3811f040-c879-4b1a-b47f-2e7fe103f92c", 00:30:51.218 "is_configured": false, 00:30:51.218 "data_offset": 0, 00:30:51.218 "data_size": 63488 00:30:51.218 }, 00:30:51.218 { 00:30:51.218 "name": "BaseBdev4", 00:30:51.218 "uuid": "4201a812-7870-4139-a968-e1826ed5c0a8", 00:30:51.218 "is_configured": true, 00:30:51.218 "data_offset": 2048, 00:30:51.218 "data_size": 63488 00:30:51.218 } 00:30:51.218 ] 00:30:51.218 }' 00:30:51.218 18:29:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:30:51.218 18:29:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:51.477 18:29:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:51.477 18:29:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:51.477 18:29:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:51.477 18:29:22 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:30:51.477 18:29:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:51.477 18:29:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:30:51.477 18:29:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:30:51.477 18:29:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:51.477 18:29:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:51.477 [2024-12-06 18:29:22.377611] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:30:51.477 18:29:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:51.477 18:29:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:30:51.477 18:29:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:30:51.477 18:29:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:30:51.477 18:29:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:30:51.477 18:29:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:30:51.477 18:29:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:30:51.477 18:29:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:30:51.477 18:29:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:30:51.477 18:29:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:30:51.477 18:29:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:30:51.477 18:29:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:30:51.477 18:29:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:51.477 18:29:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:51.477 18:29:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:51.477 18:29:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:51.477 18:29:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:30:51.477 "name": "Existed_Raid", 00:30:51.477 "uuid": "38fc3edf-b95a-4908-b3fb-2ceefdb4542e", 00:30:51.477 "strip_size_kb": 0, 00:30:51.477 "state": "configuring", 00:30:51.477 "raid_level": "raid1", 00:30:51.477 "superblock": true, 00:30:51.477 "num_base_bdevs": 4, 00:30:51.477 "num_base_bdevs_discovered": 3, 00:30:51.477 "num_base_bdevs_operational": 4, 00:30:51.477 "base_bdevs_list": [ 00:30:51.477 { 00:30:51.477 "name": "BaseBdev1", 00:30:51.477 "uuid": "85ea0930-efd8-4612-adc3-714b35301799", 00:30:51.477 "is_configured": true, 00:30:51.477 "data_offset": 2048, 00:30:51.477 "data_size": 63488 00:30:51.477 }, 00:30:51.477 { 00:30:51.477 "name": null, 00:30:51.477 "uuid": "e3dcf711-5f70-4dea-90d3-51f229eaab82", 00:30:51.477 "is_configured": false, 00:30:51.477 "data_offset": 0, 00:30:51.477 "data_size": 63488 00:30:51.477 }, 00:30:51.477 { 00:30:51.477 "name": "BaseBdev3", 00:30:51.477 "uuid": "3811f040-c879-4b1a-b47f-2e7fe103f92c", 00:30:51.477 "is_configured": true, 00:30:51.477 "data_offset": 2048, 00:30:51.477 "data_size": 63488 00:30:51.477 }, 00:30:51.477 { 00:30:51.477 "name": "BaseBdev4", 00:30:51.477 "uuid": 
"4201a812-7870-4139-a968-e1826ed5c0a8", 00:30:51.477 "is_configured": true, 00:30:51.477 "data_offset": 2048, 00:30:51.477 "data_size": 63488 00:30:51.477 } 00:30:51.477 ] 00:30:51.477 }' 00:30:51.477 18:29:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:30:51.477 18:29:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:52.045 18:29:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:52.045 18:29:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:52.045 18:29:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:52.045 18:29:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:30:52.045 18:29:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:52.045 18:29:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:30:52.045 18:29:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:30:52.045 18:29:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:52.045 18:29:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:52.045 [2024-12-06 18:29:22.821026] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:30:52.045 18:29:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:52.045 18:29:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:30:52.045 18:29:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:30:52.045 18:29:22 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:30:52.045 18:29:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:30:52.045 18:29:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:30:52.045 18:29:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:30:52.045 18:29:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:30:52.045 18:29:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:30:52.045 18:29:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:30:52.045 18:29:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:30:52.045 18:29:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:52.045 18:29:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:30:52.045 18:29:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:52.045 18:29:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:52.045 18:29:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:52.045 18:29:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:30:52.045 "name": "Existed_Raid", 00:30:52.045 "uuid": "38fc3edf-b95a-4908-b3fb-2ceefdb4542e", 00:30:52.045 "strip_size_kb": 0, 00:30:52.045 "state": "configuring", 00:30:52.045 "raid_level": "raid1", 00:30:52.045 "superblock": true, 00:30:52.045 "num_base_bdevs": 4, 00:30:52.045 "num_base_bdevs_discovered": 2, 00:30:52.045 "num_base_bdevs_operational": 4, 00:30:52.045 "base_bdevs_list": [ 00:30:52.045 { 00:30:52.045 "name": null, 00:30:52.045 
"uuid": "85ea0930-efd8-4612-adc3-714b35301799", 00:30:52.045 "is_configured": false, 00:30:52.045 "data_offset": 0, 00:30:52.045 "data_size": 63488 00:30:52.045 }, 00:30:52.045 { 00:30:52.045 "name": null, 00:30:52.045 "uuid": "e3dcf711-5f70-4dea-90d3-51f229eaab82", 00:30:52.045 "is_configured": false, 00:30:52.045 "data_offset": 0, 00:30:52.045 "data_size": 63488 00:30:52.045 }, 00:30:52.045 { 00:30:52.045 "name": "BaseBdev3", 00:30:52.045 "uuid": "3811f040-c879-4b1a-b47f-2e7fe103f92c", 00:30:52.045 "is_configured": true, 00:30:52.045 "data_offset": 2048, 00:30:52.045 "data_size": 63488 00:30:52.045 }, 00:30:52.045 { 00:30:52.045 "name": "BaseBdev4", 00:30:52.045 "uuid": "4201a812-7870-4139-a968-e1826ed5c0a8", 00:30:52.045 "is_configured": true, 00:30:52.045 "data_offset": 2048, 00:30:52.045 "data_size": 63488 00:30:52.045 } 00:30:52.045 ] 00:30:52.045 }' 00:30:52.045 18:29:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:30:52.045 18:29:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:52.614 18:29:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:52.614 18:29:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:30:52.614 18:29:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:52.614 18:29:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:52.614 18:29:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:52.614 18:29:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:30:52.614 18:29:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:30:52.614 18:29:23 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:30:52.614 18:29:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:52.614 [2024-12-06 18:29:23.409354] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:30:52.614 18:29:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:52.614 18:29:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:30:52.614 18:29:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:30:52.614 18:29:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:30:52.614 18:29:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:30:52.614 18:29:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:30:52.614 18:29:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:30:52.614 18:29:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:30:52.614 18:29:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:30:52.614 18:29:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:30:52.614 18:29:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:30:52.614 18:29:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:52.614 18:29:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:52.614 18:29:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:52.614 18:29:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 
-- # jq -r '.[] | select(.name == "Existed_Raid")' 00:30:52.614 18:29:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:52.614 18:29:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:30:52.614 "name": "Existed_Raid", 00:30:52.614 "uuid": "38fc3edf-b95a-4908-b3fb-2ceefdb4542e", 00:30:52.614 "strip_size_kb": 0, 00:30:52.614 "state": "configuring", 00:30:52.614 "raid_level": "raid1", 00:30:52.614 "superblock": true, 00:30:52.614 "num_base_bdevs": 4, 00:30:52.614 "num_base_bdevs_discovered": 3, 00:30:52.614 "num_base_bdevs_operational": 4, 00:30:52.614 "base_bdevs_list": [ 00:30:52.614 { 00:30:52.614 "name": null, 00:30:52.614 "uuid": "85ea0930-efd8-4612-adc3-714b35301799", 00:30:52.614 "is_configured": false, 00:30:52.614 "data_offset": 0, 00:30:52.614 "data_size": 63488 00:30:52.614 }, 00:30:52.614 { 00:30:52.614 "name": "BaseBdev2", 00:30:52.614 "uuid": "e3dcf711-5f70-4dea-90d3-51f229eaab82", 00:30:52.614 "is_configured": true, 00:30:52.614 "data_offset": 2048, 00:30:52.614 "data_size": 63488 00:30:52.614 }, 00:30:52.614 { 00:30:52.614 "name": "BaseBdev3", 00:30:52.614 "uuid": "3811f040-c879-4b1a-b47f-2e7fe103f92c", 00:30:52.614 "is_configured": true, 00:30:52.614 "data_offset": 2048, 00:30:52.614 "data_size": 63488 00:30:52.614 }, 00:30:52.614 { 00:30:52.614 "name": "BaseBdev4", 00:30:52.614 "uuid": "4201a812-7870-4139-a968-e1826ed5c0a8", 00:30:52.614 "is_configured": true, 00:30:52.614 "data_offset": 2048, 00:30:52.614 "data_size": 63488 00:30:52.614 } 00:30:52.614 ] 00:30:52.614 }' 00:30:52.614 18:29:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:30:52.614 18:29:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:53.182 18:29:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:53.182 18:29:23 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:30:53.182 18:29:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:53.182 18:29:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:30:53.182 18:29:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:53.182 18:29:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:30:53.182 18:29:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:53.183 18:29:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:53.183 18:29:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:53.183 18:29:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:30:53.183 18:29:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:53.183 18:29:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 85ea0930-efd8-4612-adc3-714b35301799 00:30:53.183 18:29:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:53.183 18:29:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:53.183 [2024-12-06 18:29:23.975666] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:30:53.183 [2024-12-06 18:29:23.975880] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:30:53.183 [2024-12-06 18:29:23.975899] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:30:53.183 [2024-12-06 18:29:23.976186] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:30:53.183 
[2024-12-06 18:29:23.976337] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:30:53.183 [2024-12-06 18:29:23.976348] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:30:53.183 [2024-12-06 18:29:23.976480] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:30:53.183 NewBaseBdev 00:30:53.183 18:29:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:53.183 18:29:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:30:53.183 18:29:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:30:53.183 18:29:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:30:53.183 18:29:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:30:53.183 18:29:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:30:53.183 18:29:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:30:53.183 18:29:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:30:53.183 18:29:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:53.183 18:29:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:53.183 18:29:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:53.183 18:29:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:30:53.183 18:29:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:53.183 18:29:23 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:30:53.183 [ 00:30:53.183 { 00:30:53.183 "name": "NewBaseBdev", 00:30:53.183 "aliases": [ 00:30:53.183 "85ea0930-efd8-4612-adc3-714b35301799" 00:30:53.183 ], 00:30:53.183 "product_name": "Malloc disk", 00:30:53.183 "block_size": 512, 00:30:53.183 "num_blocks": 65536, 00:30:53.183 "uuid": "85ea0930-efd8-4612-adc3-714b35301799", 00:30:53.183 "assigned_rate_limits": { 00:30:53.183 "rw_ios_per_sec": 0, 00:30:53.183 "rw_mbytes_per_sec": 0, 00:30:53.183 "r_mbytes_per_sec": 0, 00:30:53.183 "w_mbytes_per_sec": 0 00:30:53.183 }, 00:30:53.183 "claimed": true, 00:30:53.183 "claim_type": "exclusive_write", 00:30:53.183 "zoned": false, 00:30:53.183 "supported_io_types": { 00:30:53.183 "read": true, 00:30:53.183 "write": true, 00:30:53.183 "unmap": true, 00:30:53.183 "flush": true, 00:30:53.183 "reset": true, 00:30:53.183 "nvme_admin": false, 00:30:53.183 "nvme_io": false, 00:30:53.183 "nvme_io_md": false, 00:30:53.183 "write_zeroes": true, 00:30:53.183 "zcopy": true, 00:30:53.183 "get_zone_info": false, 00:30:53.183 "zone_management": false, 00:30:53.183 "zone_append": false, 00:30:53.183 "compare": false, 00:30:53.183 "compare_and_write": false, 00:30:53.183 "abort": true, 00:30:53.183 "seek_hole": false, 00:30:53.183 "seek_data": false, 00:30:53.183 "copy": true, 00:30:53.183 "nvme_iov_md": false 00:30:53.183 }, 00:30:53.183 "memory_domains": [ 00:30:53.183 { 00:30:53.183 "dma_device_id": "system", 00:30:53.183 "dma_device_type": 1 00:30:53.183 }, 00:30:53.183 { 00:30:53.183 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:53.183 "dma_device_type": 2 00:30:53.183 } 00:30:53.183 ], 00:30:53.183 "driver_specific": {} 00:30:53.183 } 00:30:53.183 ] 00:30:53.183 18:29:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:53.183 18:29:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:30:53.183 18:29:24 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:30:53.183 18:29:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:30:53.183 18:29:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:30:53.183 18:29:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:30:53.183 18:29:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:30:53.183 18:29:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:30:53.183 18:29:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:30:53.183 18:29:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:30:53.183 18:29:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:30:53.183 18:29:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:30:53.183 18:29:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:53.183 18:29:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:30:53.183 18:29:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:53.183 18:29:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:53.183 18:29:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:53.183 18:29:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:30:53.183 "name": "Existed_Raid", 00:30:53.183 "uuid": "38fc3edf-b95a-4908-b3fb-2ceefdb4542e", 00:30:53.183 "strip_size_kb": 0, 00:30:53.183 "state": "online", 00:30:53.183 "raid_level": 
"raid1", 00:30:53.183 "superblock": true, 00:30:53.183 "num_base_bdevs": 4, 00:30:53.183 "num_base_bdevs_discovered": 4, 00:30:53.183 "num_base_bdevs_operational": 4, 00:30:53.183 "base_bdevs_list": [ 00:30:53.183 { 00:30:53.183 "name": "NewBaseBdev", 00:30:53.183 "uuid": "85ea0930-efd8-4612-adc3-714b35301799", 00:30:53.183 "is_configured": true, 00:30:53.183 "data_offset": 2048, 00:30:53.183 "data_size": 63488 00:30:53.183 }, 00:30:53.183 { 00:30:53.183 "name": "BaseBdev2", 00:30:53.183 "uuid": "e3dcf711-5f70-4dea-90d3-51f229eaab82", 00:30:53.183 "is_configured": true, 00:30:53.184 "data_offset": 2048, 00:30:53.184 "data_size": 63488 00:30:53.184 }, 00:30:53.184 { 00:30:53.184 "name": "BaseBdev3", 00:30:53.184 "uuid": "3811f040-c879-4b1a-b47f-2e7fe103f92c", 00:30:53.184 "is_configured": true, 00:30:53.184 "data_offset": 2048, 00:30:53.184 "data_size": 63488 00:30:53.184 }, 00:30:53.184 { 00:30:53.184 "name": "BaseBdev4", 00:30:53.184 "uuid": "4201a812-7870-4139-a968-e1826ed5c0a8", 00:30:53.184 "is_configured": true, 00:30:53.184 "data_offset": 2048, 00:30:53.184 "data_size": 63488 00:30:53.184 } 00:30:53.184 ] 00:30:53.184 }' 00:30:53.184 18:29:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:30:53.184 18:29:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:53.752 18:29:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:30:53.752 18:29:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:30:53.752 18:29:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:30:53.752 18:29:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:30:53.752 18:29:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:30:53.752 18:29:24 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:30:53.752 18:29:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:30:53.752 18:29:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:53.752 18:29:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:53.752 18:29:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:30:53.752 [2024-12-06 18:29:24.499427] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:30:53.752 18:29:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:53.752 18:29:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:30:53.752 "name": "Existed_Raid", 00:30:53.752 "aliases": [ 00:30:53.752 "38fc3edf-b95a-4908-b3fb-2ceefdb4542e" 00:30:53.752 ], 00:30:53.752 "product_name": "Raid Volume", 00:30:53.752 "block_size": 512, 00:30:53.752 "num_blocks": 63488, 00:30:53.752 "uuid": "38fc3edf-b95a-4908-b3fb-2ceefdb4542e", 00:30:53.752 "assigned_rate_limits": { 00:30:53.752 "rw_ios_per_sec": 0, 00:30:53.752 "rw_mbytes_per_sec": 0, 00:30:53.752 "r_mbytes_per_sec": 0, 00:30:53.752 "w_mbytes_per_sec": 0 00:30:53.752 }, 00:30:53.752 "claimed": false, 00:30:53.752 "zoned": false, 00:30:53.752 "supported_io_types": { 00:30:53.752 "read": true, 00:30:53.752 "write": true, 00:30:53.752 "unmap": false, 00:30:53.752 "flush": false, 00:30:53.752 "reset": true, 00:30:53.752 "nvme_admin": false, 00:30:53.752 "nvme_io": false, 00:30:53.752 "nvme_io_md": false, 00:30:53.752 "write_zeroes": true, 00:30:53.752 "zcopy": false, 00:30:53.752 "get_zone_info": false, 00:30:53.752 "zone_management": false, 00:30:53.752 "zone_append": false, 00:30:53.752 "compare": false, 00:30:53.752 "compare_and_write": false, 00:30:53.752 "abort": false, 00:30:53.752 "seek_hole": false, 
00:30:53.752 "seek_data": false, 00:30:53.752 "copy": false, 00:30:53.752 "nvme_iov_md": false 00:30:53.752 }, 00:30:53.752 "memory_domains": [ 00:30:53.752 { 00:30:53.752 "dma_device_id": "system", 00:30:53.752 "dma_device_type": 1 00:30:53.752 }, 00:30:53.752 { 00:30:53.752 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:53.752 "dma_device_type": 2 00:30:53.752 }, 00:30:53.752 { 00:30:53.752 "dma_device_id": "system", 00:30:53.752 "dma_device_type": 1 00:30:53.752 }, 00:30:53.752 { 00:30:53.752 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:53.752 "dma_device_type": 2 00:30:53.752 }, 00:30:53.752 { 00:30:53.752 "dma_device_id": "system", 00:30:53.752 "dma_device_type": 1 00:30:53.752 }, 00:30:53.752 { 00:30:53.752 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:53.752 "dma_device_type": 2 00:30:53.752 }, 00:30:53.752 { 00:30:53.752 "dma_device_id": "system", 00:30:53.752 "dma_device_type": 1 00:30:53.752 }, 00:30:53.752 { 00:30:53.752 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:53.752 "dma_device_type": 2 00:30:53.752 } 00:30:53.752 ], 00:30:53.752 "driver_specific": { 00:30:53.752 "raid": { 00:30:53.752 "uuid": "38fc3edf-b95a-4908-b3fb-2ceefdb4542e", 00:30:53.752 "strip_size_kb": 0, 00:30:53.752 "state": "online", 00:30:53.752 "raid_level": "raid1", 00:30:53.752 "superblock": true, 00:30:53.752 "num_base_bdevs": 4, 00:30:53.752 "num_base_bdevs_discovered": 4, 00:30:53.752 "num_base_bdevs_operational": 4, 00:30:53.752 "base_bdevs_list": [ 00:30:53.752 { 00:30:53.752 "name": "NewBaseBdev", 00:30:53.752 "uuid": "85ea0930-efd8-4612-adc3-714b35301799", 00:30:53.752 "is_configured": true, 00:30:53.752 "data_offset": 2048, 00:30:53.752 "data_size": 63488 00:30:53.752 }, 00:30:53.752 { 00:30:53.752 "name": "BaseBdev2", 00:30:53.752 "uuid": "e3dcf711-5f70-4dea-90d3-51f229eaab82", 00:30:53.752 "is_configured": true, 00:30:53.753 "data_offset": 2048, 00:30:53.753 "data_size": 63488 00:30:53.753 }, 00:30:53.753 { 00:30:53.753 "name": "BaseBdev3", 00:30:53.753 "uuid": 
"3811f040-c879-4b1a-b47f-2e7fe103f92c", 00:30:53.753 "is_configured": true, 00:30:53.753 "data_offset": 2048, 00:30:53.753 "data_size": 63488 00:30:53.753 }, 00:30:53.753 { 00:30:53.753 "name": "BaseBdev4", 00:30:53.753 "uuid": "4201a812-7870-4139-a968-e1826ed5c0a8", 00:30:53.753 "is_configured": true, 00:30:53.753 "data_offset": 2048, 00:30:53.753 "data_size": 63488 00:30:53.753 } 00:30:53.753 ] 00:30:53.753 } 00:30:53.753 } 00:30:53.753 }' 00:30:53.753 18:29:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:30:53.753 18:29:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:30:53.753 BaseBdev2 00:30:53.753 BaseBdev3 00:30:53.753 BaseBdev4' 00:30:53.753 18:29:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:30:53.753 18:29:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:30:53.753 18:29:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:30:53.753 18:29:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:30:53.753 18:29:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:53.753 18:29:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:30:53.753 18:29:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:53.753 18:29:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:53.753 18:29:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:30:53.753 18:29:24 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:30:53.753 18:29:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:30:53.753 18:29:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:30:53.753 18:29:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:30:53.753 18:29:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:53.753 18:29:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:54.012 18:29:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:54.012 18:29:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:30:54.012 18:29:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:30:54.012 18:29:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:30:54.012 18:29:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:30:54.012 18:29:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:30:54.012 18:29:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:54.012 18:29:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:54.012 18:29:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:54.012 18:29:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:30:54.012 18:29:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:30:54.012 
18:29:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:30:54.012 18:29:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:30:54.012 18:29:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:30:54.012 18:29:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:54.012 18:29:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:54.012 18:29:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:54.012 18:29:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:30:54.012 18:29:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:30:54.012 18:29:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:30:54.012 18:29:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:54.012 18:29:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:54.012 [2024-12-06 18:29:24.806615] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:30:54.012 [2024-12-06 18:29:24.806647] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:30:54.012 [2024-12-06 18:29:24.806728] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:30:54.012 [2024-12-06 18:29:24.807041] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:30:54.012 [2024-12-06 18:29:24.807058] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:30:54.012 18:29:24 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:54.012 18:29:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 73573 00:30:54.012 18:29:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 73573 ']' 00:30:54.012 18:29:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 73573 00:30:54.012 18:29:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:30:54.012 18:29:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:54.012 18:29:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73573 00:30:54.012 killing process with pid 73573 00:30:54.012 18:29:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:30:54.012 18:29:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:30:54.012 18:29:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73573' 00:30:54.012 18:29:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 73573 00:30:54.012 [2024-12-06 18:29:24.854469] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:30:54.012 18:29:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 73573 00:30:54.575 [2024-12-06 18:29:25.259322] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:30:55.510 18:29:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:30:55.510 00:30:55.510 real 0m11.407s 00:30:55.510 user 0m17.995s 00:30:55.510 sys 0m2.380s 00:30:55.510 18:29:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:55.510 ************************************ 00:30:55.510 
END TEST raid_state_function_test_sb 00:30:55.510 ************************************ 00:30:55.510 18:29:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:30:55.768 18:29:26 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid1 4 00:30:55.768 18:29:26 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:30:55.768 18:29:26 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:55.768 18:29:26 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:30:55.768 ************************************ 00:30:55.768 START TEST raid_superblock_test 00:30:55.768 ************************************ 00:30:55.768 18:29:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 4 00:30:55.768 18:29:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:30:55.768 18:29:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:30:55.768 18:29:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:30:55.768 18:29:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:30:55.768 18:29:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:30:55.768 18:29:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:30:55.768 18:29:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:30:55.768 18:29:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:30:55.768 18:29:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:30:55.768 18:29:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:30:55.768 18:29:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 
00:30:55.768 18:29:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:30:55.769 18:29:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:30:55.769 18:29:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:30:55.769 18:29:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:30:55.769 18:29:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=74238 00:30:55.769 18:29:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:30:55.769 18:29:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 74238 00:30:55.769 18:29:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 74238 ']' 00:30:55.769 18:29:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:55.769 18:29:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:55.769 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:55.769 18:29:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:55.769 18:29:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:55.769 18:29:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:30:55.769 [2024-12-06 18:29:26.604181] Starting SPDK v25.01-pre git sha1 b6a18b192 / DPDK 24.03.0 initialization... 
00:30:55.769 [2024-12-06 18:29:26.604342] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74238 ] 00:30:56.027 [2024-12-06 18:29:26.792954] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:56.027 [2024-12-06 18:29:26.947096] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:56.286 [2024-12-06 18:29:27.183229] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:30:56.286 [2024-12-06 18:29:27.183471] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:30:56.544 18:29:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:56.544 18:29:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:30:56.544 18:29:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:30:56.544 18:29:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:30:56.544 18:29:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:30:56.544 18:29:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:30:56.544 18:29:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:30:56.544 18:29:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:30:56.544 18:29:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:30:56.544 18:29:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:30:56.544 18:29:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:30:56.545 
18:29:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:56.545 18:29:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:30:56.545 malloc1 00:30:56.545 18:29:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:56.545 18:29:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:30:56.545 18:29:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:56.545 18:29:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:30:56.545 [2024-12-06 18:29:27.480211] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:30:56.545 [2024-12-06 18:29:27.480397] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:30:56.545 [2024-12-06 18:29:27.480458] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:30:56.545 [2024-12-06 18:29:27.480699] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:30:56.545 [2024-12-06 18:29:27.483201] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:30:56.545 [2024-12-06 18:29:27.483241] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:30:56.545 pt1 00:30:56.545 18:29:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:56.545 18:29:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:30:56.545 18:29:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:30:56.545 18:29:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:30:56.545 18:29:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:30:56.545 18:29:27 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:30:56.545 18:29:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:30:56.545 18:29:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:30:56.545 18:29:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:30:56.545 18:29:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:30:56.545 18:29:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:56.545 18:29:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:30:56.804 malloc2 00:30:56.804 18:29:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:56.804 18:29:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:30:56.804 18:29:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:56.804 18:29:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:30:56.804 [2024-12-06 18:29:27.536322] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:30:56.804 [2024-12-06 18:29:27.536491] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:30:56.804 [2024-12-06 18:29:27.536598] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:30:56.804 [2024-12-06 18:29:27.536695] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:30:56.804 [2024-12-06 18:29:27.539135] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:30:56.804 [2024-12-06 18:29:27.539289] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:30:56.804 
pt2 00:30:56.804 18:29:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:56.804 18:29:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:30:56.804 18:29:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:30:56.804 18:29:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:30:56.804 18:29:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:30:56.804 18:29:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:30:56.804 18:29:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:30:56.804 18:29:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:30:56.804 18:29:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:30:56.804 18:29:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:30:56.804 18:29:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:56.804 18:29:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:30:56.804 malloc3 00:30:56.804 18:29:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:56.804 18:29:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:30:56.804 18:29:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:56.804 18:29:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:30:56.804 [2024-12-06 18:29:27.610183] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:30:56.804 [2024-12-06 18:29:27.610343] 
vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:30:56.804 [2024-12-06 18:29:27.610376] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:30:56.804 [2024-12-06 18:29:27.610389] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:30:56.804 [2024-12-06 18:29:27.612707] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:30:56.804 [2024-12-06 18:29:27.612748] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:30:56.804 pt3 00:30:56.804 18:29:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:56.804 18:29:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:30:56.804 18:29:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:30:56.804 18:29:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:30:56.804 18:29:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:30:56.804 18:29:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:30:56.804 18:29:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:30:56.804 18:29:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:30:56.804 18:29:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:30:56.804 18:29:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:30:56.804 18:29:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:56.804 18:29:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:30:56.804 malloc4 00:30:56.804 18:29:27 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:56.804 18:29:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:30:56.804 18:29:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:56.804 18:29:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:30:56.804 [2024-12-06 18:29:27.665097] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:30:56.804 [2024-12-06 18:29:27.665169] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:30:56.804 [2024-12-06 18:29:27.665192] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:30:56.804 [2024-12-06 18:29:27.665204] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:30:56.804 [2024-12-06 18:29:27.667534] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:30:56.804 [2024-12-06 18:29:27.667574] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:30:56.804 pt4 00:30:56.804 18:29:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:56.804 18:29:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:30:56.804 18:29:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:30:56.804 18:29:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:30:56.804 18:29:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:56.804 18:29:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:30:56.804 [2024-12-06 18:29:27.677119] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:30:56.804 [2024-12-06 18:29:27.679289] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:30:56.804 [2024-12-06 18:29:27.679366] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:30:56.804 [2024-12-06 18:29:27.679431] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:30:56.804 [2024-12-06 18:29:27.679607] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:30:56.804 [2024-12-06 18:29:27.679625] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:30:56.804 [2024-12-06 18:29:27.679883] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:30:56.804 [2024-12-06 18:29:27.680050] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:30:56.804 [2024-12-06 18:29:27.680068] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:30:56.804 [2024-12-06 18:29:27.680227] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:30:56.804 18:29:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:56.804 18:29:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:30:56.804 18:29:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:30:56.804 18:29:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:30:56.804 18:29:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:30:56.804 18:29:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:30:56.804 18:29:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:30:56.804 18:29:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:30:56.804 
18:29:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:30:56.804 18:29:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:30:56.804 18:29:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:30:56.804 18:29:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:56.804 18:29:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:56.804 18:29:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:56.804 18:29:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:30:56.804 18:29:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:56.804 18:29:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:30:56.804 "name": "raid_bdev1", 00:30:56.804 "uuid": "d9ffbc25-b70d-4d44-acce-7bacd4f2ba64", 00:30:56.804 "strip_size_kb": 0, 00:30:56.804 "state": "online", 00:30:56.804 "raid_level": "raid1", 00:30:56.804 "superblock": true, 00:30:56.804 "num_base_bdevs": 4, 00:30:56.804 "num_base_bdevs_discovered": 4, 00:30:56.804 "num_base_bdevs_operational": 4, 00:30:56.805 "base_bdevs_list": [ 00:30:56.805 { 00:30:56.805 "name": "pt1", 00:30:56.805 "uuid": "00000000-0000-0000-0000-000000000001", 00:30:56.805 "is_configured": true, 00:30:56.805 "data_offset": 2048, 00:30:56.805 "data_size": 63488 00:30:56.805 }, 00:30:56.805 { 00:30:56.805 "name": "pt2", 00:30:56.805 "uuid": "00000000-0000-0000-0000-000000000002", 00:30:56.805 "is_configured": true, 00:30:56.805 "data_offset": 2048, 00:30:56.805 "data_size": 63488 00:30:56.805 }, 00:30:56.805 { 00:30:56.805 "name": "pt3", 00:30:56.805 "uuid": "00000000-0000-0000-0000-000000000003", 00:30:56.805 "is_configured": true, 00:30:56.805 "data_offset": 2048, 00:30:56.805 "data_size": 63488 
00:30:56.805 }, 00:30:56.805 { 00:30:56.805 "name": "pt4", 00:30:56.805 "uuid": "00000000-0000-0000-0000-000000000004", 00:30:56.805 "is_configured": true, 00:30:56.805 "data_offset": 2048, 00:30:56.805 "data_size": 63488 00:30:56.805 } 00:30:56.805 ] 00:30:56.805 }' 00:30:56.805 18:29:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:30:56.805 18:29:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:30:57.371 18:29:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:30:57.371 18:29:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:30:57.371 18:29:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:30:57.371 18:29:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:30:57.371 18:29:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:30:57.371 18:29:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:30:57.371 18:29:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:30:57.371 18:29:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:30:57.371 18:29:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:57.371 18:29:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:30:57.371 [2024-12-06 18:29:28.124804] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:30:57.371 18:29:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:57.371 18:29:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:30:57.371 "name": "raid_bdev1", 00:30:57.371 "aliases": [ 00:30:57.371 "d9ffbc25-b70d-4d44-acce-7bacd4f2ba64" 00:30:57.371 ], 
00:30:57.371 "product_name": "Raid Volume", 00:30:57.371 "block_size": 512, 00:30:57.371 "num_blocks": 63488, 00:30:57.371 "uuid": "d9ffbc25-b70d-4d44-acce-7bacd4f2ba64", 00:30:57.371 "assigned_rate_limits": { 00:30:57.371 "rw_ios_per_sec": 0, 00:30:57.371 "rw_mbytes_per_sec": 0, 00:30:57.371 "r_mbytes_per_sec": 0, 00:30:57.371 "w_mbytes_per_sec": 0 00:30:57.371 }, 00:30:57.371 "claimed": false, 00:30:57.371 "zoned": false, 00:30:57.371 "supported_io_types": { 00:30:57.371 "read": true, 00:30:57.371 "write": true, 00:30:57.371 "unmap": false, 00:30:57.371 "flush": false, 00:30:57.371 "reset": true, 00:30:57.371 "nvme_admin": false, 00:30:57.371 "nvme_io": false, 00:30:57.371 "nvme_io_md": false, 00:30:57.371 "write_zeroes": true, 00:30:57.371 "zcopy": false, 00:30:57.371 "get_zone_info": false, 00:30:57.372 "zone_management": false, 00:30:57.372 "zone_append": false, 00:30:57.372 "compare": false, 00:30:57.372 "compare_and_write": false, 00:30:57.372 "abort": false, 00:30:57.372 "seek_hole": false, 00:30:57.372 "seek_data": false, 00:30:57.372 "copy": false, 00:30:57.372 "nvme_iov_md": false 00:30:57.372 }, 00:30:57.372 "memory_domains": [ 00:30:57.372 { 00:30:57.372 "dma_device_id": "system", 00:30:57.372 "dma_device_type": 1 00:30:57.372 }, 00:30:57.372 { 00:30:57.372 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:57.372 "dma_device_type": 2 00:30:57.372 }, 00:30:57.372 { 00:30:57.372 "dma_device_id": "system", 00:30:57.372 "dma_device_type": 1 00:30:57.372 }, 00:30:57.372 { 00:30:57.372 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:57.372 "dma_device_type": 2 00:30:57.372 }, 00:30:57.372 { 00:30:57.372 "dma_device_id": "system", 00:30:57.372 "dma_device_type": 1 00:30:57.372 }, 00:30:57.372 { 00:30:57.372 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:57.372 "dma_device_type": 2 00:30:57.372 }, 00:30:57.372 { 00:30:57.372 "dma_device_id": "system", 00:30:57.372 "dma_device_type": 1 00:30:57.372 }, 00:30:57.372 { 00:30:57.372 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:30:57.372 "dma_device_type": 2 00:30:57.372 } 00:30:57.372 ], 00:30:57.372 "driver_specific": { 00:30:57.372 "raid": { 00:30:57.372 "uuid": "d9ffbc25-b70d-4d44-acce-7bacd4f2ba64", 00:30:57.372 "strip_size_kb": 0, 00:30:57.372 "state": "online", 00:30:57.372 "raid_level": "raid1", 00:30:57.372 "superblock": true, 00:30:57.372 "num_base_bdevs": 4, 00:30:57.372 "num_base_bdevs_discovered": 4, 00:30:57.372 "num_base_bdevs_operational": 4, 00:30:57.372 "base_bdevs_list": [ 00:30:57.372 { 00:30:57.372 "name": "pt1", 00:30:57.372 "uuid": "00000000-0000-0000-0000-000000000001", 00:30:57.372 "is_configured": true, 00:30:57.372 "data_offset": 2048, 00:30:57.372 "data_size": 63488 00:30:57.372 }, 00:30:57.372 { 00:30:57.372 "name": "pt2", 00:30:57.372 "uuid": "00000000-0000-0000-0000-000000000002", 00:30:57.372 "is_configured": true, 00:30:57.372 "data_offset": 2048, 00:30:57.372 "data_size": 63488 00:30:57.372 }, 00:30:57.372 { 00:30:57.372 "name": "pt3", 00:30:57.372 "uuid": "00000000-0000-0000-0000-000000000003", 00:30:57.372 "is_configured": true, 00:30:57.372 "data_offset": 2048, 00:30:57.372 "data_size": 63488 00:30:57.372 }, 00:30:57.372 { 00:30:57.372 "name": "pt4", 00:30:57.372 "uuid": "00000000-0000-0000-0000-000000000004", 00:30:57.372 "is_configured": true, 00:30:57.372 "data_offset": 2048, 00:30:57.372 "data_size": 63488 00:30:57.372 } 00:30:57.372 ] 00:30:57.372 } 00:30:57.372 } 00:30:57.372 }' 00:30:57.372 18:29:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:30:57.372 18:29:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:30:57.372 pt2 00:30:57.372 pt3 00:30:57.372 pt4' 00:30:57.372 18:29:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:30:57.372 18:29:28 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:30:57.372 18:29:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:30:57.372 18:29:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:30:57.372 18:29:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:30:57.372 18:29:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:57.372 18:29:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:30:57.372 18:29:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:57.372 18:29:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:30:57.372 18:29:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:30:57.372 18:29:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:30:57.372 18:29:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:30:57.372 18:29:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:57.372 18:29:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:30:57.372 18:29:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:30:57.372 18:29:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:57.631 18:29:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:30:57.631 18:29:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:30:57.631 18:29:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:30:57.631 18:29:28 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:30:57.631 18:29:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:30:57.631 18:29:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:57.631 18:29:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:30:57.631 18:29:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:57.631 18:29:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:30:57.631 18:29:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:30:57.631 18:29:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:30:57.631 18:29:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:30:57.631 18:29:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:30:57.631 18:29:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:57.631 18:29:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:30:57.631 18:29:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:57.631 18:29:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:30:57.631 18:29:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:30:57.631 18:29:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:30:57.631 18:29:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:30:57.631 18:29:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:30:57.631 18:29:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:30:57.631 [2024-12-06 18:29:28.416465] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:30:57.631 18:29:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:57.631 18:29:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=d9ffbc25-b70d-4d44-acce-7bacd4f2ba64 00:30:57.631 18:29:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z d9ffbc25-b70d-4d44-acce-7bacd4f2ba64 ']' 00:30:57.631 18:29:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:30:57.631 18:29:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:57.631 18:29:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:30:57.631 [2024-12-06 18:29:28.468082] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:30:57.631 [2024-12-06 18:29:28.468122] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:30:57.631 [2024-12-06 18:29:28.468262] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:30:57.631 [2024-12-06 18:29:28.468413] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:30:57.631 [2024-12-06 18:29:28.468446] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:30:57.631 18:29:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:57.631 18:29:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:57.631 18:29:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:57.631 18:29:28 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:30:57.631 18:29:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:30:57.631 18:29:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:57.631 18:29:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:30:57.631 18:29:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:30:57.631 18:29:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:30:57.631 18:29:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:30:57.631 18:29:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:57.631 18:29:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:30:57.631 18:29:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:57.631 18:29:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:30:57.632 18:29:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:30:57.632 18:29:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:57.632 18:29:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:30:57.632 18:29:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:57.632 18:29:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:30:57.632 18:29:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:30:57.632 18:29:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:57.632 18:29:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:30:57.632 18:29:28 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:57.632 18:29:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:30:57.632 18:29:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:30:57.632 18:29:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:57.632 18:29:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:30:57.632 18:29:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:57.632 18:29:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:30:57.632 18:29:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:57.632 18:29:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:30:57.632 18:29:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:30:57.891 18:29:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:57.891 18:29:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:30:57.891 18:29:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:30:57.891 18:29:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:30:57.891 18:29:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:30:57.891 18:29:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:30:57.891 18:29:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:57.891 18:29:28 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:30:57.891 18:29:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:57.891 18:29:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:30:57.891 18:29:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:57.891 18:29:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:30:57.891 [2024-12-06 18:29:28.623913] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:30:57.891 [2024-12-06 18:29:28.626548] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:30:57.891 [2024-12-06 18:29:28.626835] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:30:57.891 [2024-12-06 18:29:28.626928] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:30:57.891 [2024-12-06 18:29:28.627006] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:30:57.891 [2024-12-06 18:29:28.627091] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:30:57.891 [2024-12-06 18:29:28.627130] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:30:57.891 [2024-12-06 18:29:28.627192] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:30:57.891 [2024-12-06 18:29:28.627218] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:30:57.891 [2024-12-06 18:29:28.627238] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name 
raid_bdev1, state configuring 00:30:57.891 request: 00:30:57.891 { 00:30:57.891 "name": "raid_bdev1", 00:30:57.891 "raid_level": "raid1", 00:30:57.891 "base_bdevs": [ 00:30:57.891 "malloc1", 00:30:57.891 "malloc2", 00:30:57.891 "malloc3", 00:30:57.891 "malloc4" 00:30:57.891 ], 00:30:57.891 "superblock": false, 00:30:57.891 "method": "bdev_raid_create", 00:30:57.891 "req_id": 1 00:30:57.891 } 00:30:57.891 Got JSON-RPC error response 00:30:57.891 response: 00:30:57.891 { 00:30:57.891 "code": -17, 00:30:57.891 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:30:57.891 } 00:30:57.891 18:29:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:30:57.891 18:29:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:30:57.891 18:29:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:30:57.891 18:29:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:30:57.891 18:29:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:30:57.891 18:29:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:57.891 18:29:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:57.891 18:29:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:30:57.891 18:29:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:30:57.891 18:29:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:57.891 18:29:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:30:57.891 18:29:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:30:57.891 18:29:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:30:57.891 
18:29:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:57.891 18:29:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:30:57.891 [2024-12-06 18:29:28.687960] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:30:57.891 [2024-12-06 18:29:28.688027] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:30:57.891 [2024-12-06 18:29:28.688048] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:30:57.891 [2024-12-06 18:29:28.688062] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:30:57.891 [2024-12-06 18:29:28.690566] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:30:57.891 [2024-12-06 18:29:28.690620] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:30:57.891 [2024-12-06 18:29:28.690712] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:30:57.891 [2024-12-06 18:29:28.690773] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:30:57.891 pt1 00:30:57.891 18:29:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:57.891 18:29:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:30:57.891 18:29:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:30:57.891 18:29:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:30:57.891 18:29:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:30:57.891 18:29:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:30:57.891 18:29:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:30:57.891 18:29:28 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:30:57.891 18:29:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:30:57.891 18:29:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:30:57.891 18:29:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:30:57.891 18:29:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:57.891 18:29:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:57.891 18:29:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:57.891 18:29:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:30:57.891 18:29:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:57.891 18:29:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:30:57.891 "name": "raid_bdev1", 00:30:57.891 "uuid": "d9ffbc25-b70d-4d44-acce-7bacd4f2ba64", 00:30:57.891 "strip_size_kb": 0, 00:30:57.892 "state": "configuring", 00:30:57.892 "raid_level": "raid1", 00:30:57.892 "superblock": true, 00:30:57.892 "num_base_bdevs": 4, 00:30:57.892 "num_base_bdevs_discovered": 1, 00:30:57.892 "num_base_bdevs_operational": 4, 00:30:57.892 "base_bdevs_list": [ 00:30:57.892 { 00:30:57.892 "name": "pt1", 00:30:57.892 "uuid": "00000000-0000-0000-0000-000000000001", 00:30:57.892 "is_configured": true, 00:30:57.892 "data_offset": 2048, 00:30:57.892 "data_size": 63488 00:30:57.892 }, 00:30:57.892 { 00:30:57.892 "name": null, 00:30:57.892 "uuid": "00000000-0000-0000-0000-000000000002", 00:30:57.892 "is_configured": false, 00:30:57.892 "data_offset": 2048, 00:30:57.892 "data_size": 63488 00:30:57.892 }, 00:30:57.892 { 00:30:57.892 "name": null, 00:30:57.892 "uuid": "00000000-0000-0000-0000-000000000003", 00:30:57.892 
"is_configured": false, 00:30:57.892 "data_offset": 2048, 00:30:57.892 "data_size": 63488 00:30:57.892 }, 00:30:57.892 { 00:30:57.892 "name": null, 00:30:57.892 "uuid": "00000000-0000-0000-0000-000000000004", 00:30:57.892 "is_configured": false, 00:30:57.892 "data_offset": 2048, 00:30:57.892 "data_size": 63488 00:30:57.892 } 00:30:57.892 ] 00:30:57.892 }' 00:30:57.892 18:29:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:30:57.892 18:29:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:30:58.150 18:29:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:30:58.150 18:29:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:30:58.150 18:29:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:58.150 18:29:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:30:58.408 [2024-12-06 18:29:29.099500] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:30:58.408 [2024-12-06 18:29:29.099618] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:30:58.408 [2024-12-06 18:29:29.099655] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:30:58.408 [2024-12-06 18:29:29.099674] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:30:58.408 [2024-12-06 18:29:29.100588] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:30:58.409 [2024-12-06 18:29:29.100616] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:30:58.409 [2024-12-06 18:29:29.100779] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:30:58.409 [2024-12-06 18:29:29.100818] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 
00:30:58.409 pt2 00:30:58.409 18:29:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:58.409 18:29:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:30:58.409 18:29:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:58.409 18:29:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:30:58.409 [2024-12-06 18:29:29.107401] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:30:58.409 18:29:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:58.409 18:29:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:30:58.409 18:29:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:30:58.409 18:29:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:30:58.409 18:29:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:30:58.409 18:29:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:30:58.409 18:29:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:30:58.409 18:29:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:30:58.409 18:29:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:30:58.409 18:29:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:30:58.409 18:29:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:30:58.409 18:29:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:58.409 18:29:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 
00:30:58.409 18:29:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:58.409 18:29:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:30:58.409 18:29:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:58.409 18:29:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:30:58.409 "name": "raid_bdev1", 00:30:58.409 "uuid": "d9ffbc25-b70d-4d44-acce-7bacd4f2ba64", 00:30:58.409 "strip_size_kb": 0, 00:30:58.409 "state": "configuring", 00:30:58.409 "raid_level": "raid1", 00:30:58.409 "superblock": true, 00:30:58.409 "num_base_bdevs": 4, 00:30:58.409 "num_base_bdevs_discovered": 1, 00:30:58.409 "num_base_bdevs_operational": 4, 00:30:58.409 "base_bdevs_list": [ 00:30:58.409 { 00:30:58.409 "name": "pt1", 00:30:58.409 "uuid": "00000000-0000-0000-0000-000000000001", 00:30:58.409 "is_configured": true, 00:30:58.409 "data_offset": 2048, 00:30:58.409 "data_size": 63488 00:30:58.409 }, 00:30:58.409 { 00:30:58.409 "name": null, 00:30:58.409 "uuid": "00000000-0000-0000-0000-000000000002", 00:30:58.409 "is_configured": false, 00:30:58.409 "data_offset": 0, 00:30:58.409 "data_size": 63488 00:30:58.409 }, 00:30:58.409 { 00:30:58.409 "name": null, 00:30:58.409 "uuid": "00000000-0000-0000-0000-000000000003", 00:30:58.409 "is_configured": false, 00:30:58.409 "data_offset": 2048, 00:30:58.409 "data_size": 63488 00:30:58.409 }, 00:30:58.409 { 00:30:58.409 "name": null, 00:30:58.409 "uuid": "00000000-0000-0000-0000-000000000004", 00:30:58.409 "is_configured": false, 00:30:58.409 "data_offset": 2048, 00:30:58.409 "data_size": 63488 00:30:58.409 } 00:30:58.409 ] 00:30:58.409 }' 00:30:58.409 18:29:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:30:58.409 18:29:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:30:58.669 18:29:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i 
= 1 )) 00:30:58.669 18:29:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:30:58.669 18:29:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:30:58.669 18:29:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:58.669 18:29:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:30:58.669 [2024-12-06 18:29:29.538806] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:30:58.669 [2024-12-06 18:29:29.538882] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:30:58.669 [2024-12-06 18:29:29.538907] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:30:58.669 [2024-12-06 18:29:29.538920] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:30:58.669 [2024-12-06 18:29:29.539428] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:30:58.669 [2024-12-06 18:29:29.539450] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:30:58.669 [2024-12-06 18:29:29.539535] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:30:58.669 [2024-12-06 18:29:29.539558] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:30:58.669 pt2 00:30:58.669 18:29:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:58.669 18:29:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:30:58.669 18:29:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:30:58.669 18:29:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:30:58.669 18:29:29 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:58.669 18:29:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:30:58.669 [2024-12-06 18:29:29.550762] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:30:58.669 [2024-12-06 18:29:29.550818] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:30:58.669 [2024-12-06 18:29:29.550841] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:30:58.669 [2024-12-06 18:29:29.550853] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:30:58.669 [2024-12-06 18:29:29.551274] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:30:58.669 [2024-12-06 18:29:29.551301] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:30:58.669 [2024-12-06 18:29:29.551376] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:30:58.669 [2024-12-06 18:29:29.551397] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:30:58.669 pt3 00:30:58.669 18:29:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:58.669 18:29:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:30:58.669 18:29:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:30:58.669 18:29:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:30:58.669 18:29:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:58.669 18:29:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:30:58.669 [2024-12-06 18:29:29.562716] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:30:58.669 [2024-12-06 
18:29:29.562762] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:30:58.669 [2024-12-06 18:29:29.562782] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:30:58.669 [2024-12-06 18:29:29.562794] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:30:58.669 [2024-12-06 18:29:29.563230] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:30:58.669 [2024-12-06 18:29:29.563251] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:30:58.669 [2024-12-06 18:29:29.563323] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:30:58.669 [2024-12-06 18:29:29.563350] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:30:58.669 [2024-12-06 18:29:29.563535] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:30:58.669 [2024-12-06 18:29:29.563548] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:30:58.669 [2024-12-06 18:29:29.563845] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:30:58.669 [2024-12-06 18:29:29.564050] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:30:58.669 [2024-12-06 18:29:29.564067] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:30:58.669 [2024-12-06 18:29:29.564233] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:30:58.669 pt4 00:30:58.669 18:29:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:58.669 18:29:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:30:58.669 18:29:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:30:58.669 18:29:29 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:30:58.669 18:29:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:30:58.669 18:29:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:30:58.669 18:29:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:30:58.669 18:29:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:30:58.669 18:29:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:30:58.669 18:29:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:30:58.669 18:29:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:30:58.669 18:29:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:30:58.669 18:29:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:30:58.669 18:29:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:58.669 18:29:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:58.669 18:29:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:58.669 18:29:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:30:58.669 18:29:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:58.928 18:29:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:30:58.928 "name": "raid_bdev1", 00:30:58.928 "uuid": "d9ffbc25-b70d-4d44-acce-7bacd4f2ba64", 00:30:58.928 "strip_size_kb": 0, 00:30:58.928 "state": "online", 00:30:58.928 "raid_level": "raid1", 00:30:58.928 "superblock": true, 00:30:58.928 "num_base_bdevs": 4, 00:30:58.928 
"num_base_bdevs_discovered": 4, 00:30:58.928 "num_base_bdevs_operational": 4, 00:30:58.928 "base_bdevs_list": [ 00:30:58.928 { 00:30:58.928 "name": "pt1", 00:30:58.928 "uuid": "00000000-0000-0000-0000-000000000001", 00:30:58.928 "is_configured": true, 00:30:58.928 "data_offset": 2048, 00:30:58.928 "data_size": 63488 00:30:58.928 }, 00:30:58.928 { 00:30:58.928 "name": "pt2", 00:30:58.928 "uuid": "00000000-0000-0000-0000-000000000002", 00:30:58.928 "is_configured": true, 00:30:58.928 "data_offset": 2048, 00:30:58.928 "data_size": 63488 00:30:58.928 }, 00:30:58.928 { 00:30:58.928 "name": "pt3", 00:30:58.928 "uuid": "00000000-0000-0000-0000-000000000003", 00:30:58.928 "is_configured": true, 00:30:58.928 "data_offset": 2048, 00:30:58.928 "data_size": 63488 00:30:58.928 }, 00:30:58.928 { 00:30:58.928 "name": "pt4", 00:30:58.928 "uuid": "00000000-0000-0000-0000-000000000004", 00:30:58.928 "is_configured": true, 00:30:58.928 "data_offset": 2048, 00:30:58.928 "data_size": 63488 00:30:58.928 } 00:30:58.928 ] 00:30:58.928 }' 00:30:58.928 18:29:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:30:58.928 18:29:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:30:59.188 18:29:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:30:59.188 18:29:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:30:59.188 18:29:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:30:59.188 18:29:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:30:59.188 18:29:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:30:59.188 18:29:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:30:59.188 18:29:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b 
raid_bdev1 00:30:59.188 18:29:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:30:59.188 18:29:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:59.188 18:29:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:30:59.188 [2024-12-06 18:29:29.986616] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:30:59.188 18:29:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:59.188 18:29:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:30:59.188 "name": "raid_bdev1", 00:30:59.188 "aliases": [ 00:30:59.188 "d9ffbc25-b70d-4d44-acce-7bacd4f2ba64" 00:30:59.188 ], 00:30:59.188 "product_name": "Raid Volume", 00:30:59.188 "block_size": 512, 00:30:59.188 "num_blocks": 63488, 00:30:59.188 "uuid": "d9ffbc25-b70d-4d44-acce-7bacd4f2ba64", 00:30:59.188 "assigned_rate_limits": { 00:30:59.188 "rw_ios_per_sec": 0, 00:30:59.188 "rw_mbytes_per_sec": 0, 00:30:59.188 "r_mbytes_per_sec": 0, 00:30:59.188 "w_mbytes_per_sec": 0 00:30:59.188 }, 00:30:59.188 "claimed": false, 00:30:59.188 "zoned": false, 00:30:59.188 "supported_io_types": { 00:30:59.188 "read": true, 00:30:59.188 "write": true, 00:30:59.188 "unmap": false, 00:30:59.188 "flush": false, 00:30:59.188 "reset": true, 00:30:59.188 "nvme_admin": false, 00:30:59.188 "nvme_io": false, 00:30:59.188 "nvme_io_md": false, 00:30:59.188 "write_zeroes": true, 00:30:59.188 "zcopy": false, 00:30:59.188 "get_zone_info": false, 00:30:59.188 "zone_management": false, 00:30:59.188 "zone_append": false, 00:30:59.188 "compare": false, 00:30:59.188 "compare_and_write": false, 00:30:59.188 "abort": false, 00:30:59.188 "seek_hole": false, 00:30:59.188 "seek_data": false, 00:30:59.188 "copy": false, 00:30:59.188 "nvme_iov_md": false 00:30:59.188 }, 00:30:59.188 "memory_domains": [ 00:30:59.188 { 00:30:59.188 "dma_device_id": "system", 00:30:59.188 
"dma_device_type": 1 00:30:59.188 }, 00:30:59.188 { 00:30:59.188 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:59.188 "dma_device_type": 2 00:30:59.188 }, 00:30:59.188 { 00:30:59.188 "dma_device_id": "system", 00:30:59.188 "dma_device_type": 1 00:30:59.188 }, 00:30:59.188 { 00:30:59.188 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:59.188 "dma_device_type": 2 00:30:59.188 }, 00:30:59.188 { 00:30:59.188 "dma_device_id": "system", 00:30:59.188 "dma_device_type": 1 00:30:59.188 }, 00:30:59.188 { 00:30:59.188 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:59.188 "dma_device_type": 2 00:30:59.188 }, 00:30:59.188 { 00:30:59.188 "dma_device_id": "system", 00:30:59.188 "dma_device_type": 1 00:30:59.188 }, 00:30:59.188 { 00:30:59.188 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:30:59.188 "dma_device_type": 2 00:30:59.188 } 00:30:59.188 ], 00:30:59.188 "driver_specific": { 00:30:59.188 "raid": { 00:30:59.188 "uuid": "d9ffbc25-b70d-4d44-acce-7bacd4f2ba64", 00:30:59.188 "strip_size_kb": 0, 00:30:59.188 "state": "online", 00:30:59.188 "raid_level": "raid1", 00:30:59.188 "superblock": true, 00:30:59.188 "num_base_bdevs": 4, 00:30:59.188 "num_base_bdevs_discovered": 4, 00:30:59.188 "num_base_bdevs_operational": 4, 00:30:59.188 "base_bdevs_list": [ 00:30:59.188 { 00:30:59.188 "name": "pt1", 00:30:59.189 "uuid": "00000000-0000-0000-0000-000000000001", 00:30:59.189 "is_configured": true, 00:30:59.189 "data_offset": 2048, 00:30:59.189 "data_size": 63488 00:30:59.189 }, 00:30:59.189 { 00:30:59.189 "name": "pt2", 00:30:59.189 "uuid": "00000000-0000-0000-0000-000000000002", 00:30:59.189 "is_configured": true, 00:30:59.189 "data_offset": 2048, 00:30:59.189 "data_size": 63488 00:30:59.189 }, 00:30:59.189 { 00:30:59.189 "name": "pt3", 00:30:59.189 "uuid": "00000000-0000-0000-0000-000000000003", 00:30:59.189 "is_configured": true, 00:30:59.189 "data_offset": 2048, 00:30:59.189 "data_size": 63488 00:30:59.189 }, 00:30:59.189 { 00:30:59.189 "name": "pt4", 00:30:59.189 "uuid": 
"00000000-0000-0000-0000-000000000004", 00:30:59.189 "is_configured": true, 00:30:59.189 "data_offset": 2048, 00:30:59.189 "data_size": 63488 00:30:59.189 } 00:30:59.189 ] 00:30:59.189 } 00:30:59.189 } 00:30:59.189 }' 00:30:59.189 18:29:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:30:59.189 18:29:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:30:59.189 pt2 00:30:59.189 pt3 00:30:59.189 pt4' 00:30:59.189 18:29:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:30:59.189 18:29:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:30:59.189 18:29:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:30:59.189 18:29:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:30:59.189 18:29:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:59.189 18:29:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:30:59.189 18:29:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:30:59.189 18:29:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:59.459 18:29:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:30:59.459 18:29:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:30:59.459 18:29:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:30:59.459 18:29:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:30:59.459 18:29:30 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:30:59.459 18:29:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:59.459 18:29:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:30:59.459 18:29:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:59.459 18:29:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:30:59.459 18:29:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:30:59.459 18:29:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:30:59.459 18:29:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:30:59.459 18:29:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:30:59.460 18:29:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:59.460 18:29:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:30:59.460 18:29:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:59.460 18:29:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:30:59.460 18:29:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:30:59.460 18:29:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:30:59.460 18:29:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:30:59.460 18:29:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:59.460 18:29:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:30:59.460 18:29:30 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:30:59.460 18:29:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:59.460 18:29:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:30:59.460 18:29:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:30:59.460 18:29:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:30:59.460 18:29:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:30:59.460 18:29:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:59.460 18:29:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:30:59.460 [2024-12-06 18:29:30.294173] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:30:59.460 18:29:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:59.460 18:29:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' d9ffbc25-b70d-4d44-acce-7bacd4f2ba64 '!=' d9ffbc25-b70d-4d44-acce-7bacd4f2ba64 ']' 00:30:59.460 18:29:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:30:59.460 18:29:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:30:59.460 18:29:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:30:59.460 18:29:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:30:59.460 18:29:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:59.460 18:29:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:30:59.460 [2024-12-06 18:29:30.329916] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:30:59.460 18:29:30 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:59.460 18:29:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:30:59.460 18:29:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:30:59.460 18:29:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:30:59.460 18:29:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:30:59.460 18:29:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:30:59.460 18:29:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:30:59.460 18:29:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:30:59.460 18:29:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:30:59.460 18:29:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:30:59.460 18:29:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:30:59.460 18:29:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:30:59.460 18:29:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:30:59.460 18:29:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:59.460 18:29:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:30:59.460 18:29:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:59.460 18:29:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:30:59.460 "name": "raid_bdev1", 00:30:59.460 "uuid": "d9ffbc25-b70d-4d44-acce-7bacd4f2ba64", 00:30:59.460 "strip_size_kb": 0, 00:30:59.460 "state": "online", 
00:30:59.460 "raid_level": "raid1", 00:30:59.460 "superblock": true, 00:30:59.460 "num_base_bdevs": 4, 00:30:59.460 "num_base_bdevs_discovered": 3, 00:30:59.460 "num_base_bdevs_operational": 3, 00:30:59.460 "base_bdevs_list": [ 00:30:59.460 { 00:30:59.460 "name": null, 00:30:59.460 "uuid": "00000000-0000-0000-0000-000000000000", 00:30:59.460 "is_configured": false, 00:30:59.460 "data_offset": 0, 00:30:59.460 "data_size": 63488 00:30:59.460 }, 00:30:59.460 { 00:30:59.460 "name": "pt2", 00:30:59.460 "uuid": "00000000-0000-0000-0000-000000000002", 00:30:59.460 "is_configured": true, 00:30:59.460 "data_offset": 2048, 00:30:59.460 "data_size": 63488 00:30:59.460 }, 00:30:59.460 { 00:30:59.460 "name": "pt3", 00:30:59.460 "uuid": "00000000-0000-0000-0000-000000000003", 00:30:59.460 "is_configured": true, 00:30:59.460 "data_offset": 2048, 00:30:59.460 "data_size": 63488 00:30:59.460 }, 00:30:59.460 { 00:30:59.460 "name": "pt4", 00:30:59.460 "uuid": "00000000-0000-0000-0000-000000000004", 00:30:59.460 "is_configured": true, 00:30:59.460 "data_offset": 2048, 00:30:59.460 "data_size": 63488 00:30:59.460 } 00:30:59.460 ] 00:30:59.460 }' 00:30:59.460 18:29:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:30:59.460 18:29:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:00.040 18:29:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:31:00.040 18:29:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:00.040 18:29:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:00.040 [2024-12-06 18:29:30.805885] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:31:00.040 [2024-12-06 18:29:30.805926] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:31:00.040 [2024-12-06 18:29:30.806018] bdev_raid.c: 492:_raid_bdev_destruct: 
*DEBUG*: raid_bdev_destruct 00:31:00.040 [2024-12-06 18:29:30.806141] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:31:00.040 [2024-12-06 18:29:30.806174] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:31:00.040 18:29:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:00.040 18:29:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:00.040 18:29:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:31:00.040 18:29:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:00.040 18:29:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:00.040 18:29:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:00.040 18:29:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:31:00.040 18:29:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:31:00.040 18:29:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:31:00.040 18:29:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:31:00.040 18:29:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:31:00.040 18:29:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:00.040 18:29:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:00.040 18:29:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:00.040 18:29:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:31:00.040 18:29:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:31:00.040 
18:29:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:31:00.041 18:29:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:00.041 18:29:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:00.041 18:29:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:00.041 18:29:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:31:00.041 18:29:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:31:00.041 18:29:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt4 00:31:00.041 18:29:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:00.041 18:29:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:00.041 18:29:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:00.041 18:29:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:31:00.041 18:29:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:31:00.041 18:29:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:31:00.041 18:29:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:31:00.041 18:29:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:31:00.041 18:29:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:00.041 18:29:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:00.041 [2024-12-06 18:29:30.897911] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:31:00.041 [2024-12-06 18:29:30.897981] vbdev_passthru.c: 
636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:31:00.041 [2024-12-06 18:29:30.898005] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:31:00.041 [2024-12-06 18:29:30.898017] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:31:00.041 [2024-12-06 18:29:30.901029] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:31:00.041 [2024-12-06 18:29:30.901073] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:31:00.041 [2024-12-06 18:29:30.901187] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:31:00.041 [2024-12-06 18:29:30.901259] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:31:00.041 pt2 00:31:00.041 18:29:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:00.041 18:29:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:31:00.041 18:29:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:31:00.041 18:29:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:31:00.041 18:29:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:31:00.041 18:29:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:31:00.041 18:29:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:31:00.041 18:29:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:31:00.041 18:29:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:31:00.041 18:29:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:31:00.041 18:29:30 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:31:00.041 18:29:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:00.041 18:29:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:00.041 18:29:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:00.041 18:29:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:00.041 18:29:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:00.041 18:29:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:31:00.041 "name": "raid_bdev1", 00:31:00.041 "uuid": "d9ffbc25-b70d-4d44-acce-7bacd4f2ba64", 00:31:00.041 "strip_size_kb": 0, 00:31:00.041 "state": "configuring", 00:31:00.041 "raid_level": "raid1", 00:31:00.041 "superblock": true, 00:31:00.041 "num_base_bdevs": 4, 00:31:00.041 "num_base_bdevs_discovered": 1, 00:31:00.041 "num_base_bdevs_operational": 3, 00:31:00.041 "base_bdevs_list": [ 00:31:00.041 { 00:31:00.041 "name": null, 00:31:00.041 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:00.041 "is_configured": false, 00:31:00.041 "data_offset": 2048, 00:31:00.041 "data_size": 63488 00:31:00.041 }, 00:31:00.041 { 00:31:00.041 "name": "pt2", 00:31:00.041 "uuid": "00000000-0000-0000-0000-000000000002", 00:31:00.041 "is_configured": true, 00:31:00.041 "data_offset": 2048, 00:31:00.041 "data_size": 63488 00:31:00.041 }, 00:31:00.041 { 00:31:00.041 "name": null, 00:31:00.041 "uuid": "00000000-0000-0000-0000-000000000003", 00:31:00.041 "is_configured": false, 00:31:00.041 "data_offset": 2048, 00:31:00.041 "data_size": 63488 00:31:00.041 }, 00:31:00.041 { 00:31:00.041 "name": null, 00:31:00.041 "uuid": "00000000-0000-0000-0000-000000000004", 00:31:00.041 "is_configured": false, 00:31:00.041 "data_offset": 2048, 00:31:00.041 "data_size": 63488 00:31:00.041 } 00:31:00.041 ] 00:31:00.041 }' 
00:31:00.041 18:29:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:31:00.041 18:29:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:00.607 18:29:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:31:00.607 18:29:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:31:00.607 18:29:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:31:00.607 18:29:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:00.607 18:29:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:00.607 [2024-12-06 18:29:31.361918] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:31:00.607 [2024-12-06 18:29:31.361989] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:31:00.607 [2024-12-06 18:29:31.362013] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:31:00.607 [2024-12-06 18:29:31.362025] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:31:00.607 [2024-12-06 18:29:31.362501] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:31:00.607 [2024-12-06 18:29:31.362521] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:31:00.607 [2024-12-06 18:29:31.362610] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:31:00.607 [2024-12-06 18:29:31.362631] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:31:00.607 pt3 00:31:00.607 18:29:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:00.607 18:29:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 
3 00:31:00.607 18:29:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:31:00.607 18:29:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:31:00.607 18:29:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:31:00.607 18:29:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:31:00.607 18:29:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:31:00.607 18:29:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:31:00.607 18:29:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:31:00.607 18:29:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:31:00.607 18:29:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:31:00.607 18:29:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:00.607 18:29:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:00.607 18:29:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:00.607 18:29:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:00.607 18:29:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:00.607 18:29:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:31:00.607 "name": "raid_bdev1", 00:31:00.607 "uuid": "d9ffbc25-b70d-4d44-acce-7bacd4f2ba64", 00:31:00.607 "strip_size_kb": 0, 00:31:00.607 "state": "configuring", 00:31:00.607 "raid_level": "raid1", 00:31:00.607 "superblock": true, 00:31:00.607 "num_base_bdevs": 4, 00:31:00.607 "num_base_bdevs_discovered": 2, 00:31:00.607 "num_base_bdevs_operational": 3, 00:31:00.607 
"base_bdevs_list": [ 00:31:00.607 { 00:31:00.607 "name": null, 00:31:00.607 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:00.607 "is_configured": false, 00:31:00.607 "data_offset": 2048, 00:31:00.607 "data_size": 63488 00:31:00.607 }, 00:31:00.607 { 00:31:00.607 "name": "pt2", 00:31:00.607 "uuid": "00000000-0000-0000-0000-000000000002", 00:31:00.607 "is_configured": true, 00:31:00.607 "data_offset": 2048, 00:31:00.607 "data_size": 63488 00:31:00.607 }, 00:31:00.607 { 00:31:00.607 "name": "pt3", 00:31:00.607 "uuid": "00000000-0000-0000-0000-000000000003", 00:31:00.607 "is_configured": true, 00:31:00.607 "data_offset": 2048, 00:31:00.607 "data_size": 63488 00:31:00.607 }, 00:31:00.607 { 00:31:00.607 "name": null, 00:31:00.607 "uuid": "00000000-0000-0000-0000-000000000004", 00:31:00.607 "is_configured": false, 00:31:00.607 "data_offset": 2048, 00:31:00.607 "data_size": 63488 00:31:00.607 } 00:31:00.607 ] 00:31:00.607 }' 00:31:00.607 18:29:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:31:00.607 18:29:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:00.865 18:29:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:31:00.865 18:29:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:31:00.865 18:29:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=3 00:31:00.865 18:29:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:31:00.865 18:29:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:00.865 18:29:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:00.865 [2024-12-06 18:29:31.793935] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:31:00.865 [2024-12-06 18:29:31.794007] vbdev_passthru.c: 
636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:31:00.865 [2024-12-06 18:29:31.794037] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:31:00.865 [2024-12-06 18:29:31.794049] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:31:00.865 [2024-12-06 18:29:31.794522] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:31:00.865 [2024-12-06 18:29:31.794548] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:31:00.865 [2024-12-06 18:29:31.794633] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:31:00.865 [2024-12-06 18:29:31.794655] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:31:00.865 [2024-12-06 18:29:31.794786] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:31:00.865 [2024-12-06 18:29:31.794796] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:31:00.865 [2024-12-06 18:29:31.795044] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:31:00.865 [2024-12-06 18:29:31.795208] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:31:00.865 [2024-12-06 18:29:31.795223] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:31:00.865 [2024-12-06 18:29:31.795357] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:31:00.865 pt4 00:31:00.865 18:29:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:00.865 18:29:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:31:00.865 18:29:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:31:00.865 18:29:31 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@104 -- # local expected_state=online 00:31:00.865 18:29:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:31:00.865 18:29:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:31:00.865 18:29:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:31:00.865 18:29:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:31:00.865 18:29:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:31:00.865 18:29:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:31:00.865 18:29:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:31:00.865 18:29:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:00.865 18:29:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:00.865 18:29:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:00.865 18:29:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:01.122 18:29:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:01.122 18:29:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:31:01.122 "name": "raid_bdev1", 00:31:01.122 "uuid": "d9ffbc25-b70d-4d44-acce-7bacd4f2ba64", 00:31:01.122 "strip_size_kb": 0, 00:31:01.122 "state": "online", 00:31:01.122 "raid_level": "raid1", 00:31:01.122 "superblock": true, 00:31:01.122 "num_base_bdevs": 4, 00:31:01.122 "num_base_bdevs_discovered": 3, 00:31:01.122 "num_base_bdevs_operational": 3, 00:31:01.122 "base_bdevs_list": [ 00:31:01.122 { 00:31:01.122 "name": null, 00:31:01.122 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:01.122 "is_configured": false, 00:31:01.122 
"data_offset": 2048, 00:31:01.122 "data_size": 63488 00:31:01.122 }, 00:31:01.122 { 00:31:01.122 "name": "pt2", 00:31:01.122 "uuid": "00000000-0000-0000-0000-000000000002", 00:31:01.122 "is_configured": true, 00:31:01.122 "data_offset": 2048, 00:31:01.122 "data_size": 63488 00:31:01.122 }, 00:31:01.122 { 00:31:01.122 "name": "pt3", 00:31:01.122 "uuid": "00000000-0000-0000-0000-000000000003", 00:31:01.122 "is_configured": true, 00:31:01.122 "data_offset": 2048, 00:31:01.122 "data_size": 63488 00:31:01.122 }, 00:31:01.122 { 00:31:01.122 "name": "pt4", 00:31:01.122 "uuid": "00000000-0000-0000-0000-000000000004", 00:31:01.122 "is_configured": true, 00:31:01.122 "data_offset": 2048, 00:31:01.122 "data_size": 63488 00:31:01.122 } 00:31:01.122 ] 00:31:01.122 }' 00:31:01.122 18:29:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:31:01.122 18:29:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:01.414 18:29:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:31:01.414 18:29:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:01.414 18:29:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:01.414 [2024-12-06 18:29:32.269886] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:31:01.414 [2024-12-06 18:29:32.269921] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:31:01.414 [2024-12-06 18:29:32.270001] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:31:01.414 [2024-12-06 18:29:32.270099] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:31:01.414 [2024-12-06 18:29:32.270116] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:31:01.414 18:29:32 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:01.414 18:29:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:31:01.414 18:29:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:01.414 18:29:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:01.414 18:29:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:01.414 18:29:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:01.414 18:29:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:31:01.414 18:29:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:31:01.414 18:29:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 4 -gt 2 ']' 00:31:01.414 18:29:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@534 -- # i=3 00:31:01.414 18:29:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt4 00:31:01.414 18:29:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:01.414 18:29:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:01.414 18:29:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:01.414 18:29:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:31:01.414 18:29:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:01.414 18:29:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:01.414 [2024-12-06 18:29:32.321876] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:31:01.414 [2024-12-06 18:29:32.321941] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev 
opened 00:31:01.414 [2024-12-06 18:29:32.321962] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c080 00:31:01.414 [2024-12-06 18:29:32.321978] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:31:01.414 [2024-12-06 18:29:32.324494] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:31:01.414 [2024-12-06 18:29:32.324534] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:31:01.414 [2024-12-06 18:29:32.324613] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:31:01.414 [2024-12-06 18:29:32.324658] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:31:01.414 [2024-12-06 18:29:32.324783] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:31:01.414 [2024-12-06 18:29:32.324804] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:31:01.414 [2024-12-06 18:29:32.324820] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:31:01.414 [2024-12-06 18:29:32.324886] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:31:01.414 [2024-12-06 18:29:32.324984] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:31:01.414 pt1 00:31:01.414 18:29:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:01.414 18:29:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 4 -gt 2 ']' 00:31:01.414 18:29:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:31:01.414 18:29:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:31:01.414 18:29:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=configuring 00:31:01.414 18:29:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:31:01.414 18:29:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:31:01.414 18:29:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:31:01.414 18:29:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:31:01.414 18:29:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:31:01.414 18:29:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:31:01.414 18:29:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:31:01.414 18:29:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:01.414 18:29:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:01.414 18:29:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:01.414 18:29:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:01.672 18:29:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:01.672 18:29:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:31:01.672 "name": "raid_bdev1", 00:31:01.672 "uuid": "d9ffbc25-b70d-4d44-acce-7bacd4f2ba64", 00:31:01.672 "strip_size_kb": 0, 00:31:01.672 "state": "configuring", 00:31:01.672 "raid_level": "raid1", 00:31:01.672 "superblock": true, 00:31:01.672 "num_base_bdevs": 4, 00:31:01.672 "num_base_bdevs_discovered": 2, 00:31:01.672 "num_base_bdevs_operational": 3, 00:31:01.672 "base_bdevs_list": [ 00:31:01.672 { 00:31:01.672 "name": null, 00:31:01.672 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:01.672 "is_configured": false, 00:31:01.672 "data_offset": 2048, 00:31:01.672 
"data_size": 63488 00:31:01.672 }, 00:31:01.672 { 00:31:01.672 "name": "pt2", 00:31:01.672 "uuid": "00000000-0000-0000-0000-000000000002", 00:31:01.672 "is_configured": true, 00:31:01.672 "data_offset": 2048, 00:31:01.672 "data_size": 63488 00:31:01.672 }, 00:31:01.672 { 00:31:01.672 "name": "pt3", 00:31:01.672 "uuid": "00000000-0000-0000-0000-000000000003", 00:31:01.672 "is_configured": true, 00:31:01.672 "data_offset": 2048, 00:31:01.672 "data_size": 63488 00:31:01.672 }, 00:31:01.672 { 00:31:01.672 "name": null, 00:31:01.672 "uuid": "00000000-0000-0000-0000-000000000004", 00:31:01.672 "is_configured": false, 00:31:01.672 "data_offset": 2048, 00:31:01.672 "data_size": 63488 00:31:01.672 } 00:31:01.672 ] 00:31:01.672 }' 00:31:01.672 18:29:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:31:01.672 18:29:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:01.929 18:29:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:31:01.929 18:29:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:31:01.929 18:29:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:01.929 18:29:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:01.929 18:29:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:01.929 18:29:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:31:01.929 18:29:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:31:01.929 18:29:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:01.929 18:29:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:01.929 [2024-12-06 
18:29:32.837892] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:31:01.929 [2024-12-06 18:29:32.837956] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:31:01.929 [2024-12-06 18:29:32.837981] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:31:01.929 [2024-12-06 18:29:32.837993] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:31:01.929 [2024-12-06 18:29:32.838449] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:31:01.929 [2024-12-06 18:29:32.838487] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:31:01.929 [2024-12-06 18:29:32.838581] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:31:01.929 [2024-12-06 18:29:32.838604] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:31:01.929 [2024-12-06 18:29:32.838765] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:31:01.929 [2024-12-06 18:29:32.838775] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:31:01.929 [2024-12-06 18:29:32.839051] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:31:01.929 [2024-12-06 18:29:32.839190] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:31:01.929 [2024-12-06 18:29:32.839219] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:31:01.929 [2024-12-06 18:29:32.839371] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:31:01.929 pt4 00:31:01.929 18:29:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:01.929 18:29:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:31:01.929 18:29:32 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:31:01.929 18:29:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:31:01.929 18:29:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:31:01.929 18:29:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:31:01.929 18:29:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:31:01.929 18:29:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:31:01.929 18:29:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:31:01.929 18:29:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:31:01.929 18:29:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:31:01.929 18:29:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:01.929 18:29:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:01.929 18:29:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:01.929 18:29:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:01.929 18:29:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:02.188 18:29:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:31:02.188 "name": "raid_bdev1", 00:31:02.188 "uuid": "d9ffbc25-b70d-4d44-acce-7bacd4f2ba64", 00:31:02.188 "strip_size_kb": 0, 00:31:02.188 "state": "online", 00:31:02.188 "raid_level": "raid1", 00:31:02.188 "superblock": true, 00:31:02.188 "num_base_bdevs": 4, 00:31:02.188 "num_base_bdevs_discovered": 3, 00:31:02.188 "num_base_bdevs_operational": 3, 00:31:02.188 "base_bdevs_list": [ 00:31:02.188 { 
00:31:02.188 "name": null, 00:31:02.188 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:02.188 "is_configured": false, 00:31:02.188 "data_offset": 2048, 00:31:02.188 "data_size": 63488 00:31:02.188 }, 00:31:02.188 { 00:31:02.188 "name": "pt2", 00:31:02.188 "uuid": "00000000-0000-0000-0000-000000000002", 00:31:02.188 "is_configured": true, 00:31:02.188 "data_offset": 2048, 00:31:02.188 "data_size": 63488 00:31:02.188 }, 00:31:02.188 { 00:31:02.188 "name": "pt3", 00:31:02.188 "uuid": "00000000-0000-0000-0000-000000000003", 00:31:02.188 "is_configured": true, 00:31:02.188 "data_offset": 2048, 00:31:02.188 "data_size": 63488 00:31:02.188 }, 00:31:02.188 { 00:31:02.188 "name": "pt4", 00:31:02.188 "uuid": "00000000-0000-0000-0000-000000000004", 00:31:02.188 "is_configured": true, 00:31:02.188 "data_offset": 2048, 00:31:02.188 "data_size": 63488 00:31:02.188 } 00:31:02.188 ] 00:31:02.188 }' 00:31:02.188 18:29:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:31:02.188 18:29:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:02.447 18:29:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:31:02.447 18:29:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:02.447 18:29:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:02.447 18:29:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:31:02.447 18:29:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:02.447 18:29:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:31:02.447 18:29:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:31:02.447 18:29:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:31:02.447 
18:29:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:02.447 18:29:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:02.447 [2024-12-06 18:29:33.282235] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:31:02.447 18:29:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:02.447 18:29:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' d9ffbc25-b70d-4d44-acce-7bacd4f2ba64 '!=' d9ffbc25-b70d-4d44-acce-7bacd4f2ba64 ']' 00:31:02.447 18:29:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 74238 00:31:02.447 18:29:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 74238 ']' 00:31:02.447 18:29:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 74238 00:31:02.447 18:29:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:31:02.447 18:29:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:02.447 18:29:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74238 00:31:02.447 18:29:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:31:02.447 18:29:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:31:02.447 killing process with pid 74238 00:31:02.447 18:29:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74238' 00:31:02.447 18:29:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 74238 00:31:02.447 [2024-12-06 18:29:33.366174] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:31:02.447 [2024-12-06 18:29:33.366263] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:31:02.447 [2024-12-06 18:29:33.366337] 
bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:31:02.447 [2024-12-06 18:29:33.366353] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:31:02.447 18:29:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 74238 00:31:03.014 [2024-12-06 18:29:33.769995] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:31:03.969 18:29:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:31:03.969 00:31:03.969 real 0m8.413s 00:31:03.969 user 0m13.137s 00:31:03.969 sys 0m1.838s 00:31:03.969 18:29:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:03.969 ************************************ 00:31:03.969 END TEST raid_superblock_test 00:31:03.969 ************************************ 00:31:03.969 18:29:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:31:04.228 18:29:34 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid1 4 read 00:31:04.228 18:29:34 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:31:04.228 18:29:34 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:04.228 18:29:34 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:31:04.228 ************************************ 00:31:04.228 START TEST raid_read_error_test 00:31:04.228 ************************************ 00:31:04.228 18:29:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 4 read 00:31:04.228 18:29:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:31:04.228 18:29:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:31:04.228 18:29:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:31:04.228 18:29:34 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:31:04.228 18:29:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:31:04.228 18:29:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:31:04.228 18:29:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:31:04.228 18:29:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:31:04.228 18:29:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:31:04.228 18:29:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:31:04.228 18:29:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:31:04.228 18:29:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:31:04.228 18:29:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:31:04.229 18:29:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:31:04.229 18:29:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:31:04.229 18:29:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:31:04.229 18:29:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:31:04.229 18:29:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:31:04.229 18:29:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:31:04.229 18:29:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:31:04.229 18:29:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:31:04.229 18:29:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:31:04.229 18:29:34 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:31:04.229 18:29:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:31:04.229 18:29:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:31:04.229 18:29:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:31:04.229 18:29:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:31:04.229 18:29:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.RtnPBVhOzn 00:31:04.229 18:29:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=74726 00:31:04.229 18:29:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 74726 00:31:04.229 18:29:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:31:04.229 18:29:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 74726 ']' 00:31:04.229 18:29:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:04.229 18:29:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:04.229 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:04.229 18:29:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:04.229 18:29:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:04.229 18:29:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:31:04.229 [2024-12-06 18:29:35.119754] Starting SPDK v25.01-pre git sha1 b6a18b192 / DPDK 24.03.0 initialization... 
00:31:04.229 [2024-12-06 18:29:35.119938] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74726 ] 00:31:04.487 [2024-12-06 18:29:35.310544] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:04.487 [2024-12-06 18:29:35.427305] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:04.746 [2024-12-06 18:29:35.638409] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:31:04.746 [2024-12-06 18:29:35.638471] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:31:05.314 18:29:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:05.314 18:29:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:31:05.314 18:29:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:31:05.314 18:29:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:31:05.314 18:29:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:05.314 18:29:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:31:05.314 BaseBdev1_malloc 00:31:05.314 18:29:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:05.314 18:29:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:31:05.314 18:29:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:05.314 18:29:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:31:05.314 true 00:31:05.314 18:29:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:31:05.314 18:29:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:31:05.314 18:29:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:05.314 18:29:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:31:05.314 [2024-12-06 18:29:36.045820] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:31:05.314 [2024-12-06 18:29:36.045875] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:31:05.314 [2024-12-06 18:29:36.045898] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:31:05.314 [2024-12-06 18:29:36.045913] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:31:05.314 [2024-12-06 18:29:36.048250] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:31:05.314 [2024-12-06 18:29:36.048290] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:31:05.314 BaseBdev1 00:31:05.314 18:29:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:05.314 18:29:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:31:05.314 18:29:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:31:05.314 18:29:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:05.314 18:29:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:31:05.314 BaseBdev2_malloc 00:31:05.314 18:29:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:05.314 18:29:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:31:05.314 18:29:36 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:31:05.314 18:29:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:31:05.314 true 00:31:05.314 18:29:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:05.314 18:29:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:31:05.314 18:29:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:05.314 18:29:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:31:05.314 [2024-12-06 18:29:36.114773] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:31:05.314 [2024-12-06 18:29:36.114826] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:31:05.314 [2024-12-06 18:29:36.114844] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:31:05.314 [2024-12-06 18:29:36.114858] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:31:05.314 [2024-12-06 18:29:36.117198] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:31:05.314 [2024-12-06 18:29:36.117236] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:31:05.314 BaseBdev2 00:31:05.314 18:29:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:05.314 18:29:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:31:05.314 18:29:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:31:05.314 18:29:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:05.314 18:29:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:31:05.314 BaseBdev3_malloc 00:31:05.314 18:29:36 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:05.314 18:29:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:31:05.314 18:29:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:05.314 18:29:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:31:05.314 true 00:31:05.314 18:29:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:05.314 18:29:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:31:05.314 18:29:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:05.314 18:29:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:31:05.314 [2024-12-06 18:29:36.204238] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:31:05.314 [2024-12-06 18:29:36.204287] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:31:05.314 [2024-12-06 18:29:36.204305] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:31:05.314 [2024-12-06 18:29:36.204318] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:31:05.314 [2024-12-06 18:29:36.206674] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:31:05.314 [2024-12-06 18:29:36.206715] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:31:05.314 BaseBdev3 00:31:05.314 18:29:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:05.314 18:29:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:31:05.314 18:29:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev4_malloc 00:31:05.314 18:29:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:05.314 18:29:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:31:05.314 BaseBdev4_malloc 00:31:05.314 18:29:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:05.314 18:29:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:31:05.314 18:29:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:05.314 18:29:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:31:05.314 true 00:31:05.573 18:29:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:05.573 18:29:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:31:05.573 18:29:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:05.573 18:29:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:31:05.573 [2024-12-06 18:29:36.268547] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:31:05.573 [2024-12-06 18:29:36.268597] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:31:05.573 [2024-12-06 18:29:36.268616] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:31:05.573 [2024-12-06 18:29:36.268630] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:31:05.573 [2024-12-06 18:29:36.270965] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:31:05.573 [2024-12-06 18:29:36.271007] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:31:05.573 BaseBdev4 00:31:05.573 18:29:36 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:05.573 18:29:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:31:05.573 18:29:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:05.573 18:29:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:31:05.573 [2024-12-06 18:29:36.280584] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:31:05.573 [2024-12-06 18:29:36.282648] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:31:05.573 [2024-12-06 18:29:36.282726] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:31:05.573 [2024-12-06 18:29:36.282788] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:31:05.573 [2024-12-06 18:29:36.283020] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:31:05.573 [2024-12-06 18:29:36.283035] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:31:05.573 [2024-12-06 18:29:36.283295] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:31:05.573 [2024-12-06 18:29:36.283481] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:31:05.573 [2024-12-06 18:29:36.283500] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:31:05.573 [2024-12-06 18:29:36.283657] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:31:05.573 18:29:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:05.573 18:29:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:31:05.573 18:29:36 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:31:05.573 18:29:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:31:05.573 18:29:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:31:05.573 18:29:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:31:05.573 18:29:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:31:05.573 18:29:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:31:05.573 18:29:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:31:05.573 18:29:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:31:05.573 18:29:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:31:05.573 18:29:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:05.573 18:29:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:05.573 18:29:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:05.573 18:29:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:31:05.573 18:29:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:05.573 18:29:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:31:05.573 "name": "raid_bdev1", 00:31:05.573 "uuid": "8e94b930-9064-415b-9fae-5e1407aa2dae", 00:31:05.573 "strip_size_kb": 0, 00:31:05.573 "state": "online", 00:31:05.573 "raid_level": "raid1", 00:31:05.573 "superblock": true, 00:31:05.573 "num_base_bdevs": 4, 00:31:05.573 "num_base_bdevs_discovered": 4, 00:31:05.573 "num_base_bdevs_operational": 4, 00:31:05.573 "base_bdevs_list": [ 00:31:05.573 { 
00:31:05.573 "name": "BaseBdev1", 00:31:05.573 "uuid": "68378ddd-c309-59a9-89bf-5eaa6dae3102", 00:31:05.573 "is_configured": true, 00:31:05.573 "data_offset": 2048, 00:31:05.573 "data_size": 63488 00:31:05.573 }, 00:31:05.573 { 00:31:05.573 "name": "BaseBdev2", 00:31:05.573 "uuid": "26544d55-f53e-5fa3-b0cb-ffb50e86240c", 00:31:05.573 "is_configured": true, 00:31:05.573 "data_offset": 2048, 00:31:05.573 "data_size": 63488 00:31:05.573 }, 00:31:05.573 { 00:31:05.573 "name": "BaseBdev3", 00:31:05.573 "uuid": "f65441e1-bb2a-54ed-878a-43137587bcbe", 00:31:05.573 "is_configured": true, 00:31:05.573 "data_offset": 2048, 00:31:05.573 "data_size": 63488 00:31:05.573 }, 00:31:05.573 { 00:31:05.573 "name": "BaseBdev4", 00:31:05.573 "uuid": "24baa39b-f6df-57c2-bc92-33972c73f16d", 00:31:05.573 "is_configured": true, 00:31:05.573 "data_offset": 2048, 00:31:05.573 "data_size": 63488 00:31:05.573 } 00:31:05.573 ] 00:31:05.573 }' 00:31:05.573 18:29:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:31:05.573 18:29:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:31:05.832 18:29:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:31:05.832 18:29:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:31:06.089 [2024-12-06 18:29:36.837715] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:31:07.028 18:29:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:31:07.028 18:29:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:07.028 18:29:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:31:07.028 18:29:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:07.028 18:29:37 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:31:07.028 18:29:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:31:07.028 18:29:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ read = \w\r\i\t\e ]] 00:31:07.028 18:29:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:31:07.028 18:29:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:31:07.028 18:29:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:31:07.028 18:29:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:31:07.028 18:29:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:31:07.028 18:29:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:31:07.028 18:29:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:31:07.028 18:29:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:31:07.028 18:29:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:31:07.028 18:29:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:31:07.028 18:29:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:31:07.028 18:29:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:07.028 18:29:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:07.028 18:29:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:31:07.028 18:29:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:07.028 18:29:37 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:07.028 18:29:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:31:07.028 "name": "raid_bdev1", 00:31:07.028 "uuid": "8e94b930-9064-415b-9fae-5e1407aa2dae", 00:31:07.028 "strip_size_kb": 0, 00:31:07.028 "state": "online", 00:31:07.028 "raid_level": "raid1", 00:31:07.028 "superblock": true, 00:31:07.028 "num_base_bdevs": 4, 00:31:07.028 "num_base_bdevs_discovered": 4, 00:31:07.028 "num_base_bdevs_operational": 4, 00:31:07.028 "base_bdevs_list": [ 00:31:07.028 { 00:31:07.028 "name": "BaseBdev1", 00:31:07.028 "uuid": "68378ddd-c309-59a9-89bf-5eaa6dae3102", 00:31:07.028 "is_configured": true, 00:31:07.028 "data_offset": 2048, 00:31:07.028 "data_size": 63488 00:31:07.028 }, 00:31:07.028 { 00:31:07.028 "name": "BaseBdev2", 00:31:07.028 "uuid": "26544d55-f53e-5fa3-b0cb-ffb50e86240c", 00:31:07.028 "is_configured": true, 00:31:07.028 "data_offset": 2048, 00:31:07.028 "data_size": 63488 00:31:07.028 }, 00:31:07.028 { 00:31:07.028 "name": "BaseBdev3", 00:31:07.028 "uuid": "f65441e1-bb2a-54ed-878a-43137587bcbe", 00:31:07.028 "is_configured": true, 00:31:07.028 "data_offset": 2048, 00:31:07.028 "data_size": 63488 00:31:07.028 }, 00:31:07.028 { 00:31:07.028 "name": "BaseBdev4", 00:31:07.028 "uuid": "24baa39b-f6df-57c2-bc92-33972c73f16d", 00:31:07.028 "is_configured": true, 00:31:07.028 "data_offset": 2048, 00:31:07.028 "data_size": 63488 00:31:07.028 } 00:31:07.028 ] 00:31:07.028 }' 00:31:07.028 18:29:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:31:07.028 18:29:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:31:07.287 18:29:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:31:07.287 18:29:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:07.287 18:29:38 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:31:07.287 [2024-12-06 18:29:38.232028] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:31:07.287 [2024-12-06 18:29:38.232073] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:31:07.546 [2024-12-06 18:29:38.235017] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:31:07.546 [2024-12-06 18:29:38.235089] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:31:07.546 [2024-12-06 18:29:38.235221] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:31:07.546 [2024-12-06 18:29:38.235238] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:31:07.546 { 00:31:07.546 "results": [ 00:31:07.546 { 00:31:07.546 "job": "raid_bdev1", 00:31:07.546 "core_mask": "0x1", 00:31:07.546 "workload": "randrw", 00:31:07.546 "percentage": 50, 00:31:07.547 "status": "finished", 00:31:07.547 "queue_depth": 1, 00:31:07.547 "io_size": 131072, 00:31:07.547 "runtime": 1.394732, 00:31:07.547 "iops": 10913.20769868333, 00:31:07.547 "mibps": 1364.1509623354164, 00:31:07.547 "io_failed": 0, 00:31:07.547 "io_timeout": 0, 00:31:07.547 "avg_latency_us": 88.80494033159114, 00:31:07.547 "min_latency_us": 24.880321285140564, 00:31:07.547 "max_latency_us": 1552.8610441767069 00:31:07.547 } 00:31:07.547 ], 00:31:07.547 "core_count": 1 00:31:07.547 } 00:31:07.547 18:29:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:07.547 18:29:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 74726 00:31:07.547 18:29:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 74726 ']' 00:31:07.547 18:29:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 74726 00:31:07.547 18:29:38 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@959 -- # uname 00:31:07.547 18:29:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:07.547 18:29:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74726 00:31:07.547 18:29:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:31:07.547 18:29:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:31:07.547 killing process with pid 74726 00:31:07.547 18:29:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74726' 00:31:07.547 18:29:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 74726 00:31:07.547 [2024-12-06 18:29:38.283661] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:31:07.547 18:29:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 74726 00:31:07.806 [2024-12-06 18:29:38.610675] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:31:09.200 18:29:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:31:09.200 18:29:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.RtnPBVhOzn 00:31:09.200 18:29:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:31:09.200 18:29:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:31:09.200 18:29:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:31:09.200 18:29:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:31:09.200 18:29:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:31:09.200 18:29:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:31:09.200 00:31:09.200 real 0m4.848s 00:31:09.200 user 0m5.710s 00:31:09.200 sys 0m0.684s 
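The trace above (bdev_raid.sh@845-847) decides pass/fail by grepping the bdevperf job summary out of the log file and checking that column 6, failures per second, is `0.00` (the script's own check is `[[ 0.00 = \0\.\0\0 ]]`): since raid1 has redundancy, the injected read error on BaseBdev1 must not surface as a failed I/O. A minimal re-creation of that pipeline is sketched below; the sample summary line is illustrative, arranged only so that field 6 lines up with `awk '{print $6}'`, and is not copied from a real bdevperf run.

```shell
# Hedged sketch of the fail_per_s extraction at bdev_raid.sh@845.
# The summary line is illustrative: fields are laid out so that
# field 6 is Fail/s, matching the script's awk '{print $6}'.
log=$(mktemp)
cat > "$log" <<'EOF'
Job: raid_bdev1 (Core Mask 0x1)
   raid_bdev1 : 1.39 10913.21 1364.15 0.00 0.00 88.80 24.88 1552.86
EOF
fail_per_s=$(grep -v Job "$log" | grep raid_bdev1 | awk '{print $6}')
[ "$fail_per_s" = "0.00" ] && echo "no failed I/O"
rm -f "$log"
```

In the run above the same pipeline read `/raidtest/tmp.RtnPBVhOzn` and produced `fail_per_s=0.00`, so the read-error test passed.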
00:31:09.200 18:29:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:09.200 18:29:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:31:09.200 ************************************ 00:31:09.200 END TEST raid_read_error_test 00:31:09.200 ************************************ 00:31:09.200 18:29:39 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid1 4 write 00:31:09.200 18:29:39 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:31:09.200 18:29:39 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:09.200 18:29:39 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:31:09.200 ************************************ 00:31:09.200 START TEST raid_write_error_test 00:31:09.200 ************************************ 00:31:09.200 18:29:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 4 write 00:31:09.200 18:29:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:31:09.200 18:29:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:31:09.200 18:29:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:31:09.200 18:29:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:31:09.200 18:29:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:31:09.200 18:29:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:31:09.200 18:29:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:31:09.200 18:29:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:31:09.200 18:29:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:31:09.200 18:29:39 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i++ )) 00:31:09.200 18:29:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:31:09.200 18:29:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:31:09.200 18:29:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:31:09.200 18:29:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:31:09.200 18:29:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:31:09.200 18:29:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:31:09.200 18:29:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:31:09.200 18:29:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:31:09.200 18:29:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:31:09.200 18:29:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:31:09.200 18:29:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:31:09.200 18:29:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:31:09.200 18:29:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:31:09.200 18:29:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:31:09.200 18:29:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:31:09.200 18:29:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:31:09.201 18:29:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:31:09.201 18:29:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.0fC3PoRLxm 00:31:09.201 18:29:39 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=74872 00:31:09.201 18:29:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 74872 00:31:09.201 18:29:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:31:09.201 18:29:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 74872 ']' 00:31:09.201 18:29:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:09.201 18:29:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:09.201 18:29:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:09.201 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:09.201 18:29:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:09.201 18:29:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:31:09.201 [2024-12-06 18:29:40.043825] Starting SPDK v25.01-pre git sha1 b6a18b192 / DPDK 24.03.0 initialization... 
00:31:09.201 [2024-12-06 18:29:40.043980] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74872 ] 00:31:09.460 [2024-12-06 18:29:40.234443] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:09.460 [2024-12-06 18:29:40.356122] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:09.720 [2024-12-06 18:29:40.566626] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:31:09.720 [2024-12-06 18:29:40.566673] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:31:09.979 18:29:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:09.979 18:29:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:31:09.979 18:29:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:31:09.979 18:29:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:31:09.979 18:29:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:09.979 18:29:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:31:10.239 BaseBdev1_malloc 00:31:10.239 18:29:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:10.239 18:29:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:31:10.239 18:29:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:10.239 18:29:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:31:10.239 true 00:31:10.239 18:29:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:31:10.239 18:29:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:31:10.239 18:29:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:10.239 18:29:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:31:10.239 [2024-12-06 18:29:40.945483] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:31:10.239 [2024-12-06 18:29:40.945537] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:31:10.240 [2024-12-06 18:29:40.945559] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:31:10.240 [2024-12-06 18:29:40.945573] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:31:10.240 [2024-12-06 18:29:40.947921] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:31:10.240 [2024-12-06 18:29:40.947962] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:31:10.240 BaseBdev1 00:31:10.240 18:29:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:10.240 18:29:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:31:10.240 18:29:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:31:10.240 18:29:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:10.240 18:29:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:31:10.240 BaseBdev2_malloc 00:31:10.240 18:29:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:10.240 18:29:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:31:10.240 18:29:40 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:10.240 18:29:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:31:10.240 true 00:31:10.240 18:29:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:10.240 18:29:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:31:10.240 18:29:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:10.240 18:29:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:31:10.240 [2024-12-06 18:29:41.013005] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:31:10.240 [2024-12-06 18:29:41.013056] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:31:10.240 [2024-12-06 18:29:41.013074] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:31:10.240 [2024-12-06 18:29:41.013087] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:31:10.240 [2024-12-06 18:29:41.015443] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:31:10.240 [2024-12-06 18:29:41.015483] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:31:10.240 BaseBdev2 00:31:10.240 18:29:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:10.240 18:29:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:31:10.240 18:29:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:31:10.240 18:29:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:10.240 18:29:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 
00:31:10.240 BaseBdev3_malloc 00:31:10.240 18:29:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:10.240 18:29:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:31:10.240 18:29:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:10.240 18:29:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:31:10.240 true 00:31:10.240 18:29:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:10.240 18:29:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:31:10.240 18:29:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:10.240 18:29:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:31:10.240 [2024-12-06 18:29:41.092286] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:31:10.240 [2024-12-06 18:29:41.092332] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:31:10.240 [2024-12-06 18:29:41.092350] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:31:10.240 [2024-12-06 18:29:41.092364] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:31:10.240 [2024-12-06 18:29:41.094731] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:31:10.240 [2024-12-06 18:29:41.094772] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:31:10.240 BaseBdev3 00:31:10.240 18:29:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:10.240 18:29:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:31:10.240 18:29:41 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:31:10.240 18:29:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:10.240 18:29:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:31:10.240 BaseBdev4_malloc 00:31:10.240 18:29:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:10.240 18:29:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:31:10.240 18:29:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:10.240 18:29:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:31:10.240 true 00:31:10.240 18:29:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:10.240 18:29:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:31:10.240 18:29:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:10.240 18:29:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:31:10.240 [2024-12-06 18:29:41.161559] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:31:10.240 [2024-12-06 18:29:41.161610] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:31:10.240 [2024-12-06 18:29:41.161628] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:31:10.240 [2024-12-06 18:29:41.161642] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:31:10.240 [2024-12-06 18:29:41.163993] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:31:10.240 [2024-12-06 18:29:41.164035] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:31:10.240 BaseBdev4 
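The loop traced above (bdev_raid.sh@814-817) builds each base bdev as a three-layer stack: a 32 MiB malloc bdev, an error-injection bdev (`EE_<name>`) wrapped around it, and a passthru bdev that exposes the final `BaseBdevN` name to the raid module; bdev_raid.sh@821 then assembles the four into `raid_bdev1`. Reconstructed as a plain script, with `rpc_cmd` stubbed out with `echo` so the sequence can be dry-run without a live SPDK target (the real helper forwards to `scripts/rpc.py` over `/var/tmp/spdk.sock`):

```shell
# Sketch of the base-bdev stack built at bdev_raid.sh@814-817, reconstructed
# from the rpc_cmd trace above. rpc_cmd is stubbed here; the real helper
# posts each RPC to the running SPDK app via scripts/rpc.py.
rpc_cmd() { echo "rpc.py $*"; }

for i in 1 2 3 4; do
  rpc_cmd bdev_malloc_create 32 512 -b "BaseBdev${i}_malloc"        # 32 MiB, 512 B blocks
  rpc_cmd bdev_error_create "BaseBdev${i}_malloc"                   # creates EE_BaseBdev${i}_malloc
  rpc_cmd bdev_passthru_create -b "EE_BaseBdev${i}_malloc" -p "BaseBdev${i}"
done
rpc_cmd bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 -s
```

The error layer is what makes the later injection possible: `bdev_error_inject_error EE_BaseBdev1_malloc write failure` (seen further down in this log) arms it so the next write through BaseBdev1 fails, which is why `num_base_bdevs_operational` drops from 4 to 3 in the write-error case.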
00:31:10.240 18:29:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:10.240 18:29:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:31:10.240 18:29:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:10.240 18:29:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:31:10.240 [2024-12-06 18:29:41.173599] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:31:10.240 [2024-12-06 18:29:41.175655] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:31:10.240 [2024-12-06 18:29:41.175736] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:31:10.240 [2024-12-06 18:29:41.175799] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:31:10.240 [2024-12-06 18:29:41.176028] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:31:10.240 [2024-12-06 18:29:41.176045] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:31:10.240 [2024-12-06 18:29:41.176311] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:31:10.240 [2024-12-06 18:29:41.176481] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:31:10.240 [2024-12-06 18:29:41.176500] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:31:10.240 [2024-12-06 18:29:41.176656] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:31:10.240 18:29:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:10.240 18:29:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 
online raid1 0 4 00:31:10.240 18:29:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:31:10.240 18:29:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:31:10.240 18:29:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:31:10.240 18:29:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:31:10.240 18:29:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:31:10.240 18:29:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:31:10.240 18:29:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:31:10.240 18:29:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:31:10.240 18:29:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:31:10.240 18:29:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:10.240 18:29:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:10.499 18:29:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:10.499 18:29:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:31:10.499 18:29:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:10.499 18:29:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:31:10.499 "name": "raid_bdev1", 00:31:10.499 "uuid": "d033d51e-6a87-4906-b0df-0bd0f12ecd6a", 00:31:10.499 "strip_size_kb": 0, 00:31:10.499 "state": "online", 00:31:10.499 "raid_level": "raid1", 00:31:10.499 "superblock": true, 00:31:10.499 "num_base_bdevs": 4, 00:31:10.499 "num_base_bdevs_discovered": 4, 00:31:10.499 
"num_base_bdevs_operational": 4, 00:31:10.499 "base_bdevs_list": [ 00:31:10.499 { 00:31:10.499 "name": "BaseBdev1", 00:31:10.499 "uuid": "ba18b99f-4bd3-5187-9888-5d27ee84b9ed", 00:31:10.499 "is_configured": true, 00:31:10.499 "data_offset": 2048, 00:31:10.499 "data_size": 63488 00:31:10.499 }, 00:31:10.499 { 00:31:10.499 "name": "BaseBdev2", 00:31:10.499 "uuid": "a79c380f-f794-549d-8bf7-93a4cbf638fa", 00:31:10.499 "is_configured": true, 00:31:10.499 "data_offset": 2048, 00:31:10.499 "data_size": 63488 00:31:10.499 }, 00:31:10.499 { 00:31:10.499 "name": "BaseBdev3", 00:31:10.499 "uuid": "f9daea1e-bf82-5841-a352-e513d46dfd14", 00:31:10.499 "is_configured": true, 00:31:10.499 "data_offset": 2048, 00:31:10.499 "data_size": 63488 00:31:10.499 }, 00:31:10.499 { 00:31:10.499 "name": "BaseBdev4", 00:31:10.499 "uuid": "3d13a25a-da41-5f65-b69d-388b634a2de7", 00:31:10.499 "is_configured": true, 00:31:10.499 "data_offset": 2048, 00:31:10.499 "data_size": 63488 00:31:10.499 } 00:31:10.499 ] 00:31:10.499 }' 00:31:10.499 18:29:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:31:10.499 18:29:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:31:10.758 18:29:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:31:10.758 18:29:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:31:10.758 [2024-12-06 18:29:41.702222] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:31:11.697 18:29:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:31:11.697 18:29:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:11.697 18:29:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:31:11.697 [2024-12-06 18:29:42.625466] 
bdev_raid.c:2276:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:31:11.697 [2024-12-06 18:29:42.625526] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:31:11.697 [2024-12-06 18:29:42.625757] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006a40 00:31:11.697 18:29:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:11.697 18:29:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:31:11.697 18:29:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:31:11.697 18:29:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ write = \w\r\i\t\e ]] 00:31:11.697 18:29:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=3 00:31:11.698 18:29:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:31:11.698 18:29:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:31:11.698 18:29:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:31:11.698 18:29:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:31:11.698 18:29:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:31:11.698 18:29:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:31:11.698 18:29:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:31:11.698 18:29:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:31:11.698 18:29:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:31:11.698 18:29:42 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:31:11.698 18:29:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:11.698 18:29:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:11.698 18:29:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:11.698 18:29:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:31:11.957 18:29:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:11.957 18:29:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:31:11.957 "name": "raid_bdev1", 00:31:11.957 "uuid": "d033d51e-6a87-4906-b0df-0bd0f12ecd6a", 00:31:11.957 "strip_size_kb": 0, 00:31:11.957 "state": "online", 00:31:11.957 "raid_level": "raid1", 00:31:11.957 "superblock": true, 00:31:11.957 "num_base_bdevs": 4, 00:31:11.957 "num_base_bdevs_discovered": 3, 00:31:11.957 "num_base_bdevs_operational": 3, 00:31:11.957 "base_bdevs_list": [ 00:31:11.957 { 00:31:11.957 "name": null, 00:31:11.957 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:11.957 "is_configured": false, 00:31:11.957 "data_offset": 0, 00:31:11.957 "data_size": 63488 00:31:11.957 }, 00:31:11.957 { 00:31:11.957 "name": "BaseBdev2", 00:31:11.957 "uuid": "a79c380f-f794-549d-8bf7-93a4cbf638fa", 00:31:11.957 "is_configured": true, 00:31:11.957 "data_offset": 2048, 00:31:11.957 "data_size": 63488 00:31:11.957 }, 00:31:11.957 { 00:31:11.957 "name": "BaseBdev3", 00:31:11.957 "uuid": "f9daea1e-bf82-5841-a352-e513d46dfd14", 00:31:11.957 "is_configured": true, 00:31:11.957 "data_offset": 2048, 00:31:11.957 "data_size": 63488 00:31:11.957 }, 00:31:11.957 { 00:31:11.957 "name": "BaseBdev4", 00:31:11.957 "uuid": "3d13a25a-da41-5f65-b69d-388b634a2de7", 00:31:11.957 "is_configured": true, 00:31:11.957 "data_offset": 2048, 00:31:11.957 "data_size": 63488 00:31:11.957 } 00:31:11.957 ] 
00:31:11.957 }' 00:31:11.957 18:29:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:31:11.957 18:29:42 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:31:12.216 18:29:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:31:12.216 18:29:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:12.216 18:29:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:31:12.216 [2024-12-06 18:29:43.073653] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:31:12.216 [2024-12-06 18:29:43.073695] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:31:12.216 [2024-12-06 18:29:43.076448] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:31:12.216 [2024-12-06 18:29:43.076502] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:31:12.216 [2024-12-06 18:29:43.076604] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:31:12.216 [2024-12-06 18:29:43.076620] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:31:12.216 { 00:31:12.216 "results": [ 00:31:12.216 { 00:31:12.216 "job": "raid_bdev1", 00:31:12.216 "core_mask": "0x1", 00:31:12.216 "workload": "randrw", 00:31:12.216 "percentage": 50, 00:31:12.216 "status": "finished", 00:31:12.216 "queue_depth": 1, 00:31:12.216 "io_size": 131072, 00:31:12.216 "runtime": 1.371782, 00:31:12.216 "iops": 11950.8784923552, 00:31:12.216 "mibps": 1493.8598115444, 00:31:12.216 "io_failed": 0, 00:31:12.216 "io_timeout": 0, 00:31:12.216 "avg_latency_us": 80.90604682974916, 00:31:12.216 "min_latency_us": 24.469076305220884, 00:31:12.217 "max_latency_us": 1454.1622489959839 00:31:12.217 } 00:31:12.217 ], 00:31:12.217 "core_count": 1 
00:31:12.217 } 00:31:12.217 18:29:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:12.217 18:29:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 74872 00:31:12.217 18:29:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 74872 ']' 00:31:12.217 18:29:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 74872 00:31:12.217 18:29:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:31:12.217 18:29:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:12.217 18:29:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74872 00:31:12.217 18:29:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:31:12.217 killing process with pid 74872 00:31:12.217 18:29:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:31:12.217 18:29:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74872' 00:31:12.217 18:29:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 74872 00:31:12.217 [2024-12-06 18:29:43.118975] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:31:12.217 18:29:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 74872 00:31:12.785 [2024-12-06 18:29:43.447003] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:31:13.723 18:29:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.0fC3PoRLxm 00:31:13.723 18:29:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:31:13.723 18:29:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:31:13.723 18:29:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # 
fail_per_s=0.00 00:31:13.723 18:29:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:31:13.723 18:29:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:31:13.723 18:29:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:31:13.723 18:29:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:31:13.723 00:31:13.723 real 0m4.735s 00:31:13.723 user 0m5.530s 00:31:13.723 sys 0m0.661s 00:31:13.723 18:29:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:13.723 18:29:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:31:13.723 ************************************ 00:31:13.723 END TEST raid_write_error_test 00:31:13.723 ************************************ 00:31:13.983 18:29:44 bdev_raid -- bdev/bdev_raid.sh@976 -- # '[' true = true ']' 00:31:13.983 18:29:44 bdev_raid -- bdev/bdev_raid.sh@977 -- # for n in 2 4 00:31:13.983 18:29:44 bdev_raid -- bdev/bdev_raid.sh@978 -- # run_test raid_rebuild_test raid_rebuild_test raid1 2 false false true 00:31:13.983 18:29:44 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:31:13.983 18:29:44 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:13.983 18:29:44 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:31:13.983 ************************************ 00:31:13.983 START TEST raid_rebuild_test 00:31:13.983 ************************************ 00:31:13.983 18:29:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 false false true 00:31:13.983 18:29:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:31:13.983 18:29:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:31:13.983 18:29:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:31:13.983 
18:29:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:31:13.983 18:29:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:31:13.983 18:29:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:31:13.983 18:29:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:31:13.983 18:29:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:31:13.983 18:29:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:31:13.983 18:29:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:31:13.983 18:29:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:31:13.983 18:29:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:31:13.983 18:29:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:31:13.983 18:29:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:31:13.983 18:29:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:31:13.983 18:29:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:31:13.983 18:29:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:31:13.983 18:29:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:31:13.983 18:29:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:31:13.983 18:29:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:31:13.983 18:29:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:31:13.983 18:29:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:31:13.983 18:29:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 
00:31:13.983 18:29:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=75020 00:31:13.983 18:29:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:31:13.983 18:29:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 75020 00:31:13.983 18:29:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@835 -- # '[' -z 75020 ']' 00:31:13.983 18:29:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:13.983 18:29:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:13.983 18:29:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:13.983 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:13.984 18:29:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:13.984 18:29:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:31:13.984 [2024-12-06 18:29:44.844342] Starting SPDK v25.01-pre git sha1 b6a18b192 / DPDK 24.03.0 initialization... 00:31:13.984 I/O size of 3145728 is greater than zero copy threshold (65536). 00:31:13.984 Zero copy mechanism will not be used. 
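The bdevperf results block printed earlier in this log reports both `iops` and `mibps`; the two are consistent with each other given the 131072-byte I/O size. A small sketch (not part of the test scripts) verifying that relation from the logged numbers:

```python
# Values copied from the "results" JSON earlier in this log.
iops = 11950.8784923552
io_size = 131072  # bytes per I/O (128 KiB)

# mibps is throughput in MiB/s: iops * io_size / 2**20.
mibps = iops * io_size / 2**20
assert abs(mibps - 1493.8598115444) < 1e-6  # matches the logged "mibps" field
```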
00:31:13.984 [2024-12-06 18:29:44.844655] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75020 ] 00:31:14.244 [2024-12-06 18:29:45.015646] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:14.244 [2024-12-06 18:29:45.127344] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:14.502 [2024-12-06 18:29:45.325178] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:31:14.502 [2024-12-06 18:29:45.325412] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:31:14.760 18:29:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:14.761 18:29:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # return 0 00:31:14.761 18:29:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:31:14.761 18:29:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:31:14.761 18:29:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:14.761 18:29:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:31:15.019 BaseBdev1_malloc 00:31:15.020 18:29:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:15.020 18:29:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:31:15.020 18:29:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:15.020 18:29:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:31:15.020 [2024-12-06 18:29:45.756957] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:31:15.020 
[2024-12-06 18:29:45.757024] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:31:15.020 [2024-12-06 18:29:45.757047] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:31:15.020 [2024-12-06 18:29:45.757062] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:31:15.020 [2024-12-06 18:29:45.759519] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:31:15.020 [2024-12-06 18:29:45.759566] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:31:15.020 BaseBdev1 00:31:15.020 18:29:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:15.020 18:29:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:31:15.020 18:29:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:31:15.020 18:29:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:15.020 18:29:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:31:15.020 BaseBdev2_malloc 00:31:15.020 18:29:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:15.020 18:29:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:31:15.020 18:29:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:15.020 18:29:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:31:15.020 [2024-12-06 18:29:45.813320] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:31:15.020 [2024-12-06 18:29:45.813383] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:31:15.020 [2024-12-06 18:29:45.813408] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 
0x0x616000007e80 00:31:15.020 [2024-12-06 18:29:45.813422] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:31:15.020 [2024-12-06 18:29:45.815762] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:31:15.020 [2024-12-06 18:29:45.815806] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:31:15.020 BaseBdev2 00:31:15.020 18:29:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:15.020 18:29:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:31:15.020 18:29:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:15.020 18:29:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:31:15.020 spare_malloc 00:31:15.020 18:29:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:15.020 18:29:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:31:15.020 18:29:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:15.020 18:29:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:31:15.020 spare_delay 00:31:15.020 18:29:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:15.020 18:29:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:31:15.020 18:29:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:15.020 18:29:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:31:15.020 [2024-12-06 18:29:45.895622] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:31:15.020 [2024-12-06 18:29:45.895686] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base 
bdev opened 00:31:15.020 [2024-12-06 18:29:45.895706] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:31:15.020 [2024-12-06 18:29:45.895720] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:31:15.020 [2024-12-06 18:29:45.898323] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:31:15.020 [2024-12-06 18:29:45.898491] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:31:15.020 spare 00:31:15.020 18:29:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:15.020 18:29:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:31:15.020 18:29:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:15.020 18:29:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:31:15.020 [2024-12-06 18:29:45.907662] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:31:15.020 [2024-12-06 18:29:45.909692] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:31:15.020 [2024-12-06 18:29:45.909785] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:31:15.020 [2024-12-06 18:29:45.909802] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:31:15.020 [2024-12-06 18:29:45.910096] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:31:15.020 [2024-12-06 18:29:45.910271] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:31:15.020 [2024-12-06 18:29:45.910301] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:31:15.020 [2024-12-06 18:29:45.910463] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: 
raid_bdev_destroy_cb 00:31:15.020 18:29:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:15.020 18:29:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:31:15.020 18:29:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:31:15.020 18:29:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:31:15.020 18:29:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:31:15.020 18:29:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:31:15.020 18:29:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:31:15.020 18:29:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:31:15.020 18:29:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:31:15.020 18:29:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:31:15.020 18:29:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:31:15.020 18:29:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:15.020 18:29:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:15.020 18:29:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:31:15.020 18:29:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:15.020 18:29:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:15.020 18:29:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:31:15.020 "name": "raid_bdev1", 00:31:15.020 "uuid": "a5bfbf4d-1134-4291-a3bd-f18c85952c7c", 00:31:15.020 "strip_size_kb": 0, 00:31:15.020 "state": "online", 00:31:15.020 
"raid_level": "raid1", 00:31:15.020 "superblock": false, 00:31:15.020 "num_base_bdevs": 2, 00:31:15.020 "num_base_bdevs_discovered": 2, 00:31:15.020 "num_base_bdevs_operational": 2, 00:31:15.020 "base_bdevs_list": [ 00:31:15.020 { 00:31:15.020 "name": "BaseBdev1", 00:31:15.020 "uuid": "f902189b-329c-5871-839a-d2e18db76ff7", 00:31:15.020 "is_configured": true, 00:31:15.020 "data_offset": 0, 00:31:15.020 "data_size": 65536 00:31:15.020 }, 00:31:15.020 { 00:31:15.020 "name": "BaseBdev2", 00:31:15.020 "uuid": "479da815-98d7-5056-9eb3-6bcdedf3999f", 00:31:15.020 "is_configured": true, 00:31:15.020 "data_offset": 0, 00:31:15.020 "data_size": 65536 00:31:15.020 } 00:31:15.020 ] 00:31:15.020 }' 00:31:15.020 18:29:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:31:15.020 18:29:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:31:15.589 18:29:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:31:15.589 18:29:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:15.589 18:29:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:31:15.589 18:29:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:31:15.589 [2024-12-06 18:29:46.343927] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:31:15.589 18:29:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:15.589 18:29:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:31:15.589 18:29:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:15.589 18:29:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:15.589 18:29:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:31:15.589 18:29:46 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:31:15.589 18:29:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:15.589 18:29:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:31:15.589 18:29:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:31:15.589 18:29:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:31:15.589 18:29:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:31:15.589 18:29:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:31:15.589 18:29:46 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:31:15.589 18:29:46 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:31:15.589 18:29:46 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:31:15.589 18:29:46 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:31:15.589 18:29:46 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:31:15.589 18:29:46 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:31:15.589 18:29:46 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:31:15.589 18:29:46 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:31:15.589 18:29:46 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:31:15.849 [2024-12-06 18:29:46.623396] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:31:15.849 /dev/nbd0 00:31:15.849 18:29:46 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:31:15.849 18:29:46 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # 
waitfornbd nbd0 00:31:15.849 18:29:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:31:15.849 18:29:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:31:15.849 18:29:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:31:15.849 18:29:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:31:15.849 18:29:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:31:15.849 18:29:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:31:15.849 18:29:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:31:15.849 18:29:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:31:15.849 18:29:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:31:15.849 1+0 records in 00:31:15.849 1+0 records out 00:31:15.849 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000315687 s, 13.0 MB/s 00:31:15.849 18:29:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:31:15.849 18:29:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:31:15.849 18:29:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:31:15.849 18:29:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:31:15.849 18:29:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:31:15.849 18:29:46 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:31:15.849 18:29:46 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:31:15.849 18:29:46 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:31:15.849 18:29:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:31:15.849 18:29:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=65536 oflag=direct 00:31:22.419 65536+0 records in 00:31:22.419 65536+0 records out 00:31:22.419 33554432 bytes (34 MB, 32 MiB) copied, 5.70488 s, 5.9 MB/s 00:31:22.419 18:29:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:31:22.419 18:29:52 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:31:22.419 18:29:52 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:31:22.419 18:29:52 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:31:22.419 18:29:52 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:31:22.419 18:29:52 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:31:22.419 18:29:52 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:31:22.419 [2024-12-06 18:29:52.642594] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:31:22.419 18:29:52 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:31:22.419 18:29:52 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:31:22.419 18:29:52 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:31:22.419 18:29:52 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:31:22.419 18:29:52 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:31:22.419 18:29:52 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:31:22.419 18:29:52 bdev_raid.raid_rebuild_test -- 
bdev/nbd_common.sh@41 -- # break 00:31:22.419 18:29:52 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:31:22.419 18:29:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:31:22.419 18:29:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:22.419 18:29:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:31:22.419 [2024-12-06 18:29:52.666360] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:31:22.419 18:29:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:22.419 18:29:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:31:22.419 18:29:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:31:22.419 18:29:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:31:22.419 18:29:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:31:22.419 18:29:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:31:22.419 18:29:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:31:22.419 18:29:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:31:22.419 18:29:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:31:22.419 18:29:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:31:22.419 18:29:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:31:22.419 18:29:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:22.419 18:29:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:22.419 18:29:52 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:22.419 18:29:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:31:22.419 18:29:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:22.419 18:29:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:31:22.419 "name": "raid_bdev1", 00:31:22.419 "uuid": "a5bfbf4d-1134-4291-a3bd-f18c85952c7c", 00:31:22.419 "strip_size_kb": 0, 00:31:22.419 "state": "online", 00:31:22.419 "raid_level": "raid1", 00:31:22.419 "superblock": false, 00:31:22.419 "num_base_bdevs": 2, 00:31:22.419 "num_base_bdevs_discovered": 1, 00:31:22.419 "num_base_bdevs_operational": 1, 00:31:22.419 "base_bdevs_list": [ 00:31:22.419 { 00:31:22.419 "name": null, 00:31:22.419 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:22.419 "is_configured": false, 00:31:22.419 "data_offset": 0, 00:31:22.419 "data_size": 65536 00:31:22.419 }, 00:31:22.419 { 00:31:22.419 "name": "BaseBdev2", 00:31:22.419 "uuid": "479da815-98d7-5056-9eb3-6bcdedf3999f", 00:31:22.419 "is_configured": true, 00:31:22.419 "data_offset": 0, 00:31:22.419 "data_size": 65536 00:31:22.419 } 00:31:22.419 ] 00:31:22.419 }' 00:31:22.419 18:29:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:31:22.419 18:29:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:31:22.419 18:29:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:31:22.419 18:29:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:22.419 18:29:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:31:22.419 [2024-12-06 18:29:53.086061] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:31:22.419 [2024-12-06 18:29:53.103246] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09bd0 
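While the rebuild runs, the script's `verify_raid_bdev_process` helper pulls the raid bdev out of `rpc.py bdev_raid_get_bdevs all` with jq and checks `.process.type // "none"` and `.process.target // "none"`. A Python analogue of that check, using an illustrative sample whose field names follow the JSON dumped in this log (the values are placeholders, not taken from a live run):

```python
import json

# Hypothetical bdev_raid_get_bdevs entry during a rebuild; field names
# mirror the log output, values are illustrative.
raid_info = json.loads("""
{
  "name": "raid_bdev1",
  "state": "online",
  "process": {
    "type": "rebuild",
    "target": "spare",
    "progress": {"blocks": 20480, "percent": 31}
  }
}
""")

def verify_process(info, process_type, target):
    # Analogue of: jq -r '.process.type // "none"' / '.process.target // "none"'
    proc = info.get("process") or {}
    return (proc.get("type", "none") == process_type
            and proc.get("target", "none") == target)

assert verify_process(raid_info, "rebuild", "spare")
```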
00:31:22.419 18:29:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:22.419 18:29:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:31:22.420 [2024-12-06 18:29:53.105481] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:31:23.358 18:29:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:31:23.358 18:29:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:31:23.358 18:29:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:31:23.358 18:29:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:31:23.358 18:29:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:31:23.358 18:29:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:23.358 18:29:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:23.358 18:29:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:23.358 18:29:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:31:23.358 18:29:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:23.358 18:29:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:31:23.358 "name": "raid_bdev1", 00:31:23.358 "uuid": "a5bfbf4d-1134-4291-a3bd-f18c85952c7c", 00:31:23.358 "strip_size_kb": 0, 00:31:23.358 "state": "online", 00:31:23.358 "raid_level": "raid1", 00:31:23.358 "superblock": false, 00:31:23.358 "num_base_bdevs": 2, 00:31:23.358 "num_base_bdevs_discovered": 2, 00:31:23.358 "num_base_bdevs_operational": 2, 00:31:23.358 "process": { 00:31:23.358 "type": "rebuild", 00:31:23.358 "target": "spare", 00:31:23.358 "progress": { 00:31:23.358 
"blocks": 20480, 00:31:23.358 "percent": 31 00:31:23.358 } 00:31:23.358 }, 00:31:23.358 "base_bdevs_list": [ 00:31:23.358 { 00:31:23.358 "name": "spare", 00:31:23.358 "uuid": "879f226a-dc29-56f8-b58f-048a782149bb", 00:31:23.358 "is_configured": true, 00:31:23.358 "data_offset": 0, 00:31:23.358 "data_size": 65536 00:31:23.358 }, 00:31:23.358 { 00:31:23.358 "name": "BaseBdev2", 00:31:23.358 "uuid": "479da815-98d7-5056-9eb3-6bcdedf3999f", 00:31:23.358 "is_configured": true, 00:31:23.358 "data_offset": 0, 00:31:23.358 "data_size": 65536 00:31:23.358 } 00:31:23.358 ] 00:31:23.358 }' 00:31:23.358 18:29:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:31:23.358 18:29:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:31:23.358 18:29:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:31:23.358 18:29:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:31:23.358 18:29:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:31:23.358 18:29:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:23.358 18:29:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:31:23.358 [2024-12-06 18:29:54.234026] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:31:23.618 [2024-12-06 18:29:54.310475] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:31:23.618 [2024-12-06 18:29:54.310553] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:31:23.618 [2024-12-06 18:29:54.310571] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:31:23.618 [2024-12-06 18:29:54.310583] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:31:23.618 18:29:54 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:23.618 18:29:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:31:23.618 18:29:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:31:23.618 18:29:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:31:23.618 18:29:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:31:23.618 18:29:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:31:23.618 18:29:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:31:23.618 18:29:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:31:23.618 18:29:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:31:23.618 18:29:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:31:23.618 18:29:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:31:23.618 18:29:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:23.618 18:29:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:23.618 18:29:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:23.618 18:29:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:31:23.618 18:29:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:23.618 18:29:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:31:23.618 "name": "raid_bdev1", 00:31:23.618 "uuid": "a5bfbf4d-1134-4291-a3bd-f18c85952c7c", 00:31:23.618 "strip_size_kb": 0, 00:31:23.618 "state": "online", 00:31:23.618 "raid_level": "raid1", 00:31:23.618 
"superblock": false, 00:31:23.618 "num_base_bdevs": 2, 00:31:23.618 "num_base_bdevs_discovered": 1, 00:31:23.618 "num_base_bdevs_operational": 1, 00:31:23.618 "base_bdevs_list": [ 00:31:23.618 { 00:31:23.618 "name": null, 00:31:23.618 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:23.618 "is_configured": false, 00:31:23.618 "data_offset": 0, 00:31:23.618 "data_size": 65536 00:31:23.618 }, 00:31:23.618 { 00:31:23.618 "name": "BaseBdev2", 00:31:23.618 "uuid": "479da815-98d7-5056-9eb3-6bcdedf3999f", 00:31:23.618 "is_configured": true, 00:31:23.618 "data_offset": 0, 00:31:23.618 "data_size": 65536 00:31:23.618 } 00:31:23.618 ] 00:31:23.618 }' 00:31:23.618 18:29:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:31:23.618 18:29:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:31:23.876 18:29:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:31:23.876 18:29:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:31:23.876 18:29:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:31:23.876 18:29:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:31:23.876 18:29:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:31:23.876 18:29:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:23.876 18:29:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:23.876 18:29:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:23.876 18:29:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:31:23.876 18:29:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:24.147 18:29:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # 
raid_bdev_info='{ 00:31:24.147 "name": "raid_bdev1", 00:31:24.147 "uuid": "a5bfbf4d-1134-4291-a3bd-f18c85952c7c", 00:31:24.147 "strip_size_kb": 0, 00:31:24.147 "state": "online", 00:31:24.147 "raid_level": "raid1", 00:31:24.147 "superblock": false, 00:31:24.147 "num_base_bdevs": 2, 00:31:24.147 "num_base_bdevs_discovered": 1, 00:31:24.147 "num_base_bdevs_operational": 1, 00:31:24.147 "base_bdevs_list": [ 00:31:24.147 { 00:31:24.147 "name": null, 00:31:24.147 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:24.147 "is_configured": false, 00:31:24.147 "data_offset": 0, 00:31:24.147 "data_size": 65536 00:31:24.147 }, 00:31:24.147 { 00:31:24.147 "name": "BaseBdev2", 00:31:24.147 "uuid": "479da815-98d7-5056-9eb3-6bcdedf3999f", 00:31:24.147 "is_configured": true, 00:31:24.147 "data_offset": 0, 00:31:24.147 "data_size": 65536 00:31:24.147 } 00:31:24.147 ] 00:31:24.147 }' 00:31:24.147 18:29:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:31:24.147 18:29:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:31:24.147 18:29:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:31:24.147 18:29:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:31:24.147 18:29:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:31:24.147 18:29:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:24.147 18:29:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:31:24.147 [2024-12-06 18:29:54.914687] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:31:24.147 [2024-12-06 18:29:54.930873] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09ca0 00:31:24.147 18:29:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:24.147 
18:29:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:31:24.147 [2024-12-06 18:29:54.933124] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:31:25.081 18:29:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:31:25.081 18:29:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:31:25.081 18:29:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:31:25.081 18:29:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:31:25.081 18:29:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:31:25.081 18:29:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:25.081 18:29:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:25.081 18:29:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:25.081 18:29:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:31:25.081 18:29:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:25.081 18:29:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:31:25.081 "name": "raid_bdev1", 00:31:25.081 "uuid": "a5bfbf4d-1134-4291-a3bd-f18c85952c7c", 00:31:25.081 "strip_size_kb": 0, 00:31:25.081 "state": "online", 00:31:25.081 "raid_level": "raid1", 00:31:25.081 "superblock": false, 00:31:25.081 "num_base_bdevs": 2, 00:31:25.081 "num_base_bdevs_discovered": 2, 00:31:25.081 "num_base_bdevs_operational": 2, 00:31:25.081 "process": { 00:31:25.081 "type": "rebuild", 00:31:25.081 "target": "spare", 00:31:25.081 "progress": { 00:31:25.081 "blocks": 20480, 00:31:25.081 "percent": 31 00:31:25.081 } 00:31:25.081 }, 00:31:25.081 "base_bdevs_list": [ 
00:31:25.081 { 00:31:25.081 "name": "spare", 00:31:25.081 "uuid": "879f226a-dc29-56f8-b58f-048a782149bb", 00:31:25.081 "is_configured": true, 00:31:25.081 "data_offset": 0, 00:31:25.081 "data_size": 65536 00:31:25.081 }, 00:31:25.081 { 00:31:25.081 "name": "BaseBdev2", 00:31:25.081 "uuid": "479da815-98d7-5056-9eb3-6bcdedf3999f", 00:31:25.081 "is_configured": true, 00:31:25.081 "data_offset": 0, 00:31:25.081 "data_size": 65536 00:31:25.081 } 00:31:25.081 ] 00:31:25.081 }' 00:31:25.081 18:29:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:31:25.081 18:29:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:31:25.081 18:29:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:31:25.341 18:29:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:31:25.341 18:29:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:31:25.341 18:29:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:31:25.341 18:29:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:31:25.341 18:29:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:31:25.341 18:29:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=373 00:31:25.341 18:29:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:31:25.341 18:29:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:31:25.341 18:29:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:31:25.341 18:29:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:31:25.341 18:29:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:31:25.341 
18:29:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:31:25.341 18:29:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:25.341 18:29:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:25.341 18:29:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:31:25.341 18:29:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:25.341 18:29:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:25.341 18:29:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:31:25.341 "name": "raid_bdev1", 00:31:25.341 "uuid": "a5bfbf4d-1134-4291-a3bd-f18c85952c7c", 00:31:25.341 "strip_size_kb": 0, 00:31:25.341 "state": "online", 00:31:25.341 "raid_level": "raid1", 00:31:25.341 "superblock": false, 00:31:25.341 "num_base_bdevs": 2, 00:31:25.341 "num_base_bdevs_discovered": 2, 00:31:25.341 "num_base_bdevs_operational": 2, 00:31:25.341 "process": { 00:31:25.341 "type": "rebuild", 00:31:25.341 "target": "spare", 00:31:25.341 "progress": { 00:31:25.341 "blocks": 22528, 00:31:25.341 "percent": 34 00:31:25.341 } 00:31:25.342 }, 00:31:25.342 "base_bdevs_list": [ 00:31:25.342 { 00:31:25.342 "name": "spare", 00:31:25.342 "uuid": "879f226a-dc29-56f8-b58f-048a782149bb", 00:31:25.342 "is_configured": true, 00:31:25.342 "data_offset": 0, 00:31:25.342 "data_size": 65536 00:31:25.342 }, 00:31:25.342 { 00:31:25.342 "name": "BaseBdev2", 00:31:25.342 "uuid": "479da815-98d7-5056-9eb3-6bcdedf3999f", 00:31:25.342 "is_configured": true, 00:31:25.342 "data_offset": 0, 00:31:25.342 "data_size": 65536 00:31:25.342 } 00:31:25.342 ] 00:31:25.342 }' 00:31:25.342 18:29:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:31:25.342 18:29:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == 
\r\e\b\u\i\l\d ]] 00:31:25.342 18:29:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:31:25.342 18:29:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:31:25.342 18:29:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:31:26.280 18:29:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:31:26.280 18:29:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:31:26.280 18:29:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:31:26.280 18:29:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:31:26.280 18:29:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:31:26.280 18:29:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:31:26.280 18:29:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:26.280 18:29:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:26.280 18:29:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:26.280 18:29:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:31:26.540 18:29:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:26.540 18:29:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:31:26.540 "name": "raid_bdev1", 00:31:26.540 "uuid": "a5bfbf4d-1134-4291-a3bd-f18c85952c7c", 00:31:26.540 "strip_size_kb": 0, 00:31:26.540 "state": "online", 00:31:26.540 "raid_level": "raid1", 00:31:26.540 "superblock": false, 00:31:26.540 "num_base_bdevs": 2, 00:31:26.540 "num_base_bdevs_discovered": 2, 00:31:26.540 "num_base_bdevs_operational": 2, 00:31:26.540 "process": { 
00:31:26.540 "type": "rebuild", 00:31:26.540 "target": "spare", 00:31:26.540 "progress": { 00:31:26.540 "blocks": 45056, 00:31:26.540 "percent": 68 00:31:26.540 } 00:31:26.540 }, 00:31:26.540 "base_bdevs_list": [ 00:31:26.540 { 00:31:26.540 "name": "spare", 00:31:26.540 "uuid": "879f226a-dc29-56f8-b58f-048a782149bb", 00:31:26.540 "is_configured": true, 00:31:26.540 "data_offset": 0, 00:31:26.540 "data_size": 65536 00:31:26.540 }, 00:31:26.540 { 00:31:26.540 "name": "BaseBdev2", 00:31:26.540 "uuid": "479da815-98d7-5056-9eb3-6bcdedf3999f", 00:31:26.540 "is_configured": true, 00:31:26.540 "data_offset": 0, 00:31:26.540 "data_size": 65536 00:31:26.540 } 00:31:26.540 ] 00:31:26.540 }' 00:31:26.540 18:29:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:31:26.540 18:29:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:31:26.540 18:29:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:31:26.540 18:29:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:31:26.540 18:29:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:31:27.475 [2024-12-06 18:29:58.145909] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:31:27.475 [2024-12-06 18:29:58.146205] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:31:27.475 [2024-12-06 18:29:58.146296] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:31:27.475 18:29:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:31:27.475 18:29:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:31:27.475 18:29:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:31:27.475 18:29:58 bdev_raid.raid_rebuild_test 
-- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:31:27.475 18:29:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:31:27.475 18:29:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:31:27.475 18:29:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:27.475 18:29:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:27.475 18:29:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:27.475 18:29:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:31:27.475 18:29:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:27.475 18:29:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:31:27.475 "name": "raid_bdev1", 00:31:27.475 "uuid": "a5bfbf4d-1134-4291-a3bd-f18c85952c7c", 00:31:27.475 "strip_size_kb": 0, 00:31:27.475 "state": "online", 00:31:27.475 "raid_level": "raid1", 00:31:27.476 "superblock": false, 00:31:27.476 "num_base_bdevs": 2, 00:31:27.476 "num_base_bdevs_discovered": 2, 00:31:27.476 "num_base_bdevs_operational": 2, 00:31:27.476 "base_bdevs_list": [ 00:31:27.476 { 00:31:27.476 "name": "spare", 00:31:27.476 "uuid": "879f226a-dc29-56f8-b58f-048a782149bb", 00:31:27.476 "is_configured": true, 00:31:27.476 "data_offset": 0, 00:31:27.476 "data_size": 65536 00:31:27.476 }, 00:31:27.476 { 00:31:27.476 "name": "BaseBdev2", 00:31:27.476 "uuid": "479da815-98d7-5056-9eb3-6bcdedf3999f", 00:31:27.476 "is_configured": true, 00:31:27.476 "data_offset": 0, 00:31:27.476 "data_size": 65536 00:31:27.476 } 00:31:27.476 ] 00:31:27.476 }' 00:31:27.476 18:29:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:31:27.734 18:29:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:31:27.734 18:29:58 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:31:27.734 18:29:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:31:27.734 18:29:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:31:27.734 18:29:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:31:27.734 18:29:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:31:27.734 18:29:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:31:27.734 18:29:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:31:27.734 18:29:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:31:27.734 18:29:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:27.734 18:29:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:27.734 18:29:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:27.734 18:29:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:31:27.734 18:29:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:27.734 18:29:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:31:27.734 "name": "raid_bdev1", 00:31:27.734 "uuid": "a5bfbf4d-1134-4291-a3bd-f18c85952c7c", 00:31:27.734 "strip_size_kb": 0, 00:31:27.734 "state": "online", 00:31:27.734 "raid_level": "raid1", 00:31:27.734 "superblock": false, 00:31:27.734 "num_base_bdevs": 2, 00:31:27.734 "num_base_bdevs_discovered": 2, 00:31:27.734 "num_base_bdevs_operational": 2, 00:31:27.734 "base_bdevs_list": [ 00:31:27.734 { 00:31:27.734 "name": "spare", 00:31:27.734 "uuid": "879f226a-dc29-56f8-b58f-048a782149bb", 00:31:27.734 "is_configured": true, 
00:31:27.734 "data_offset": 0, 00:31:27.734 "data_size": 65536 00:31:27.734 }, 00:31:27.734 { 00:31:27.734 "name": "BaseBdev2", 00:31:27.734 "uuid": "479da815-98d7-5056-9eb3-6bcdedf3999f", 00:31:27.734 "is_configured": true, 00:31:27.734 "data_offset": 0, 00:31:27.734 "data_size": 65536 00:31:27.734 } 00:31:27.734 ] 00:31:27.734 }' 00:31:27.734 18:29:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:31:27.734 18:29:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:31:27.734 18:29:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:31:27.734 18:29:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:31:27.734 18:29:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:31:27.734 18:29:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:31:27.734 18:29:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:31:27.734 18:29:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:31:27.734 18:29:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:31:27.734 18:29:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:31:27.734 18:29:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:31:27.734 18:29:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:31:27.734 18:29:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:31:27.734 18:29:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:31:27.734 18:29:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:27.734 18:29:58 bdev_raid.raid_rebuild_test 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:31:27.734 18:29:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:31:27.734 18:29:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:27.734 18:29:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:27.993 18:29:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:31:27.993 "name": "raid_bdev1", 00:31:27.993 "uuid": "a5bfbf4d-1134-4291-a3bd-f18c85952c7c", 00:31:27.993 "strip_size_kb": 0, 00:31:27.993 "state": "online", 00:31:27.993 "raid_level": "raid1", 00:31:27.993 "superblock": false, 00:31:27.993 "num_base_bdevs": 2, 00:31:27.993 "num_base_bdevs_discovered": 2, 00:31:27.993 "num_base_bdevs_operational": 2, 00:31:27.993 "base_bdevs_list": [ 00:31:27.993 { 00:31:27.993 "name": "spare", 00:31:27.993 "uuid": "879f226a-dc29-56f8-b58f-048a782149bb", 00:31:27.993 "is_configured": true, 00:31:27.993 "data_offset": 0, 00:31:27.993 "data_size": 65536 00:31:27.993 }, 00:31:27.993 { 00:31:27.993 "name": "BaseBdev2", 00:31:27.993 "uuid": "479da815-98d7-5056-9eb3-6bcdedf3999f", 00:31:27.993 "is_configured": true, 00:31:27.993 "data_offset": 0, 00:31:27.993 "data_size": 65536 00:31:27.993 } 00:31:27.993 ] 00:31:27.993 }' 00:31:27.993 18:29:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:31:27.993 18:29:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:31:28.252 18:29:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:31:28.252 18:29:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:28.252 18:29:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:31:28.252 [2024-12-06 18:29:59.086095] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:31:28.252 [2024-12-06 18:29:59.086275] 
bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:31:28.252 [2024-12-06 18:29:59.086378] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:31:28.252 [2024-12-06 18:29:59.086447] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:31:28.252 [2024-12-06 18:29:59.086460] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:31:28.252 18:29:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:28.252 18:29:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:28.252 18:29:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:28.252 18:29:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:31:28.252 18:29:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:31:28.252 18:29:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:28.252 18:29:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:31:28.252 18:29:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:31:28.252 18:29:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:31:28.252 18:29:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:31:28.252 18:29:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:31:28.252 18:29:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:31:28.252 18:29:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:31:28.252 18:29:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' 
'/dev/nbd1') 00:31:28.252 18:29:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:31:28.252 18:29:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:31:28.252 18:29:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:31:28.252 18:29:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:31:28.252 18:29:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:31:28.511 /dev/nbd0 00:31:28.511 18:29:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:31:28.511 18:29:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:31:28.511 18:29:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:31:28.511 18:29:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:31:28.511 18:29:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:31:28.511 18:29:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:31:28.512 18:29:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:31:28.512 18:29:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:31:28.512 18:29:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:31:28.512 18:29:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:31:28.512 18:29:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:31:28.512 1+0 records in 00:31:28.512 1+0 records out 00:31:28.512 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000204813 s, 20.0 MB/s 00:31:28.512 18:29:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 
-- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:31:28.512 18:29:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:31:28.512 18:29:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:31:28.512 18:29:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:31:28.512 18:29:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:31:28.512 18:29:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:31:28.512 18:29:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:31:28.512 18:29:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:31:28.771 /dev/nbd1 00:31:28.771 18:29:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:31:28.771 18:29:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:31:28.771 18:29:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:31:28.771 18:29:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:31:28.771 18:29:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:31:28.771 18:29:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:31:28.771 18:29:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:31:28.771 18:29:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:31:28.771 18:29:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:31:28.771 18:29:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:31:28.771 18:29:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd 
if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:31:28.771 1+0 records in 00:31:28.771 1+0 records out 00:31:28.771 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000376347 s, 10.9 MB/s 00:31:28.771 18:29:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:31:28.771 18:29:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:31:28.771 18:29:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:31:28.771 18:29:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:31:28.771 18:29:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:31:28.771 18:29:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:31:28.771 18:29:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:31:28.771 18:29:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:31:29.031 18:29:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:31:29.031 18:29:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:31:29.031 18:29:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:31:29.031 18:29:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:31:29.031 18:29:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:31:29.031 18:29:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:31:29.031 18:29:59 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:31:29.290 18:30:00 bdev_raid.raid_rebuild_test -- 
bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:31:29.290 18:30:00 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:31:29.290 18:30:00 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:31:29.290 18:30:00 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:31:29.290 18:30:00 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:31:29.290 18:30:00 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:31:29.290 18:30:00 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:31:29.290 18:30:00 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:31:29.290 18:30:00 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:31:29.290 18:30:00 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:31:29.551 18:30:00 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:31:29.551 18:30:00 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:31:29.551 18:30:00 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:31:29.551 18:30:00 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:31:29.551 18:30:00 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:31:29.551 18:30:00 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:31:29.551 18:30:00 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:31:29.551 18:30:00 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:31:29.551 18:30:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:31:29.551 18:30:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 75020 00:31:29.551 18:30:00 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@954 -- # '[' -z 75020 ']' 00:31:29.551 18:30:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@958 -- # kill -0 75020 00:31:29.551 18:30:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@959 -- # uname 00:31:29.551 18:30:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:29.551 18:30:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75020 00:31:29.551 18:30:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:31:29.551 18:30:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:31:29.551 18:30:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75020' 00:31:29.551 killing process with pid 75020 00:31:29.551 18:30:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@973 -- # kill 75020 00:31:29.551 Received shutdown signal, test time was about 60.000000 seconds 00:31:29.551 00:31:29.551 Latency(us) 00:31:29.551 [2024-12-06T18:30:00.500Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:29.551 [2024-12-06T18:30:00.500Z] =================================================================================================================== 00:31:29.551 [2024-12-06T18:30:00.500Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:31:29.551 [2024-12-06 18:30:00.438850] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:31:29.551 18:30:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@978 -- # wait 75020 00:31:29.811 [2024-12-06 18:30:00.743569] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:31:31.205 18:30:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:31:31.205 00:31:31.205 real 0m17.136s 00:31:31.205 user 0m18.261s 00:31:31.205 sys 0m4.321s 00:31:31.205 18:30:01 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:31.205 ************************************ 00:31:31.205 END TEST raid_rebuild_test 00:31:31.205 ************************************ 00:31:31.205 18:30:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:31:31.205 18:30:01 bdev_raid -- bdev/bdev_raid.sh@979 -- # run_test raid_rebuild_test_sb raid_rebuild_test raid1 2 true false true 00:31:31.205 18:30:01 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:31:31.205 18:30:01 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:31.205 18:30:01 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:31:31.205 ************************************ 00:31:31.205 START TEST raid_rebuild_test_sb 00:31:31.205 ************************************ 00:31:31.205 18:30:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 true false true 00:31:31.205 18:30:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:31:31.205 18:30:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:31:31.205 18:30:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:31:31.205 18:30:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:31:31.205 18:30:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:31:31.205 18:30:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:31:31.205 18:30:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:31:31.205 18:30:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:31:31.205 18:30:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:31:31.205 18:30:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= 
num_base_bdevs )) 00:31:31.205 18:30:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:31:31.205 18:30:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:31:31.205 18:30:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:31:31.205 18:30:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:31:31.205 18:30:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:31:31.205 18:30:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:31:31.205 18:30:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:31:31.205 18:30:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:31:31.205 18:30:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:31:31.205 18:30:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:31:31.205 18:30:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:31:31.205 18:30:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:31:31.205 18:30:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:31:31.205 18:30:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:31:31.205 18:30:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=75456 00:31:31.205 18:30:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 75456 00:31:31.205 18:30:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:31:31.205 18:30:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@835 -- # '[' -z 75456 ']' 00:31:31.205 18:30:01 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:31.205 18:30:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:31.205 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:31.205 18:30:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:31.205 18:30:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:31.205 18:30:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:31.205 [2024-12-06 18:30:02.079586] Starting SPDK v25.01-pre git sha1 b6a18b192 / DPDK 24.03.0 initialization... 00:31:31.205 [2024-12-06 18:30:02.079746] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75456 ] 00:31:31.205 I/O size of 3145728 is greater than zero copy threshold (65536). 00:31:31.205 Zero copy mechanism will not be used. 
00:31:31.464 [2024-12-06 18:30:02.266224] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:31.464 [2024-12-06 18:30:02.383009] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:31.723 [2024-12-06 18:30:02.598279] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:31:31.723 [2024-12-06 18:30:02.598330] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:31:31.982 18:30:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:31.982 18:30:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # return 0 00:31:31.982 18:30:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:31:31.982 18:30:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:31:31.982 18:30:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:31.982 18:30:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:32.242 BaseBdev1_malloc 00:31:32.242 18:30:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:32.242 18:30:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:31:32.242 18:30:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:32.242 18:30:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:32.242 [2024-12-06 18:30:02.981795] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:31:32.242 [2024-12-06 18:30:02.981907] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:31:32.242 [2024-12-06 18:30:02.981946] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:31:32.242 [2024-12-06 
18:30:02.981985] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:31:32.242 [2024-12-06 18:30:02.984598] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:31:32.242 [2024-12-06 18:30:02.984646] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:31:32.242 BaseBdev1 00:31:32.242 18:30:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:32.242 18:30:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:31:32.242 18:30:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:31:32.242 18:30:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:32.242 18:30:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:32.242 BaseBdev2_malloc 00:31:32.242 18:30:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:32.242 18:30:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:31:32.242 18:30:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:32.242 18:30:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:32.242 [2024-12-06 18:30:03.038592] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:31:32.243 [2024-12-06 18:30:03.038666] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:31:32.243 [2024-12-06 18:30:03.038691] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:31:32.243 [2024-12-06 18:30:03.038706] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:31:32.243 [2024-12-06 18:30:03.041119] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev 
registered 00:31:32.243 [2024-12-06 18:30:03.041178] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:31:32.243 BaseBdev2 00:31:32.243 18:30:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:32.243 18:30:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:31:32.243 18:30:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:32.243 18:30:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:32.243 spare_malloc 00:31:32.243 18:30:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:32.243 18:30:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:31:32.243 18:30:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:32.243 18:30:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:32.243 spare_delay 00:31:32.243 18:30:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:32.243 18:30:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:31:32.243 18:30:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:32.243 18:30:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:32.243 [2024-12-06 18:30:03.119825] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:31:32.243 [2024-12-06 18:30:03.119895] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:31:32.243 [2024-12-06 18:30:03.119933] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:31:32.243 [2024-12-06 18:30:03.119948] 
vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:31:32.243 [2024-12-06 18:30:03.122329] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:31:32.243 [2024-12-06 18:30:03.122378] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:31:32.243 spare 00:31:32.243 18:30:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:32.243 18:30:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:31:32.243 18:30:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:32.243 18:30:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:32.243 [2024-12-06 18:30:03.131893] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:31:32.243 [2024-12-06 18:30:03.134024] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:31:32.243 [2024-12-06 18:30:03.134218] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:31:32.243 [2024-12-06 18:30:03.134236] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:31:32.243 [2024-12-06 18:30:03.134494] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:31:32.243 [2024-12-06 18:30:03.134671] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:31:32.243 [2024-12-06 18:30:03.134682] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:31:32.243 [2024-12-06 18:30:03.134839] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:31:32.243 18:30:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:32.243 18:30:03 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:31:32.243 18:30:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:31:32.243 18:30:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:31:32.243 18:30:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:31:32.243 18:30:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:31:32.243 18:30:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:31:32.243 18:30:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:31:32.243 18:30:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:31:32.243 18:30:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:31:32.243 18:30:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:31:32.243 18:30:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:32.243 18:30:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:32.243 18:30:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:32.243 18:30:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:32.243 18:30:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:32.243 18:30:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:31:32.243 "name": "raid_bdev1", 00:31:32.243 "uuid": "36d38983-a573-415f-9aaf-cd3cf56a5370", 00:31:32.243 "strip_size_kb": 0, 00:31:32.243 "state": "online", 00:31:32.243 "raid_level": "raid1", 00:31:32.243 "superblock": true, 00:31:32.243 "num_base_bdevs": 2, 00:31:32.243 
"num_base_bdevs_discovered": 2, 00:31:32.243 "num_base_bdevs_operational": 2, 00:31:32.243 "base_bdevs_list": [ 00:31:32.243 { 00:31:32.243 "name": "BaseBdev1", 00:31:32.243 "uuid": "88a10187-b1e6-5af8-98b0-79617c7c884a", 00:31:32.243 "is_configured": true, 00:31:32.243 "data_offset": 2048, 00:31:32.243 "data_size": 63488 00:31:32.243 }, 00:31:32.243 { 00:31:32.243 "name": "BaseBdev2", 00:31:32.243 "uuid": "0ede4a4f-7dcb-5e67-b18f-cd34dccd5e26", 00:31:32.243 "is_configured": true, 00:31:32.243 "data_offset": 2048, 00:31:32.243 "data_size": 63488 00:31:32.243 } 00:31:32.243 ] 00:31:32.243 }' 00:31:32.243 18:30:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:31:32.243 18:30:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:32.811 18:30:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:31:32.811 18:30:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:32.811 18:30:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:31:32.811 18:30:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:32.811 [2024-12-06 18:30:03.595570] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:31:32.811 18:30:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:32.811 18:30:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:31:32.811 18:30:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:32.811 18:30:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:32.811 18:30:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:31:32.811 18:30:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:31:32.811 18:30:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:32.811 18:30:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:31:32.811 18:30:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:31:32.811 18:30:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:31:32.811 18:30:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:31:32.812 18:30:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:31:32.812 18:30:03 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:31:32.812 18:30:03 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:31:32.812 18:30:03 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:31:32.812 18:30:03 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:31:32.812 18:30:03 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:31:32.812 18:30:03 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:31:32.812 18:30:03 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:31:32.812 18:30:03 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:31:32.812 18:30:03 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:31:33.071 [2024-12-06 18:30:03.898895] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:31:33.071 /dev/nbd0 00:31:33.071 18:30:03 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:31:33.071 18:30:03 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 
00:31:33.071 18:30:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:31:33.071 18:30:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:31:33.071 18:30:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:31:33.071 18:30:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:31:33.071 18:30:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:31:33.071 18:30:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:31:33.071 18:30:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:31:33.071 18:30:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:31:33.071 18:30:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:31:33.071 1+0 records in 00:31:33.071 1+0 records out 00:31:33.071 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000407949 s, 10.0 MB/s 00:31:33.071 18:30:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:31:33.071 18:30:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:31:33.071 18:30:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:31:33.071 18:30:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:31:33.071 18:30:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:31:33.071 18:30:03 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:31:33.071 18:30:03 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:31:33.071 18:30:03 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:31:33.071 18:30:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:31:33.071 18:30:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=63488 oflag=direct 00:31:39.631 63488+0 records in 00:31:39.631 63488+0 records out 00:31:39.631 32505856 bytes (33 MB, 31 MiB) copied, 5.30778 s, 6.1 MB/s 00:31:39.631 18:30:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:31:39.631 18:30:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:31:39.631 18:30:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:31:39.631 18:30:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:31:39.631 18:30:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:31:39.631 18:30:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:31:39.631 18:30:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:31:39.631 [2024-12-06 18:30:09.521986] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:31:39.631 18:30:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:31:39.631 18:30:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:31:39.631 18:30:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:31:39.631 18:30:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:31:39.631 18:30:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:31:39.631 18:30:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w 
nbd0 /proc/partitions 00:31:39.631 18:30:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:31:39.631 18:30:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:31:39.631 18:30:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:31:39.631 18:30:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:39.631 18:30:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:39.631 [2024-12-06 18:30:09.562016] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:31:39.631 18:30:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:39.631 18:30:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:31:39.631 18:30:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:31:39.631 18:30:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:31:39.631 18:30:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:31:39.631 18:30:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:31:39.631 18:30:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:31:39.631 18:30:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:31:39.631 18:30:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:31:39.631 18:30:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:31:39.631 18:30:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:31:39.631 18:30:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:39.631 18:30:09 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:39.631 18:30:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:39.631 18:30:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:39.631 18:30:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:39.631 18:30:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:31:39.631 "name": "raid_bdev1", 00:31:39.631 "uuid": "36d38983-a573-415f-9aaf-cd3cf56a5370", 00:31:39.631 "strip_size_kb": 0, 00:31:39.631 "state": "online", 00:31:39.631 "raid_level": "raid1", 00:31:39.631 "superblock": true, 00:31:39.631 "num_base_bdevs": 2, 00:31:39.631 "num_base_bdevs_discovered": 1, 00:31:39.631 "num_base_bdevs_operational": 1, 00:31:39.631 "base_bdevs_list": [ 00:31:39.631 { 00:31:39.631 "name": null, 00:31:39.631 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:39.631 "is_configured": false, 00:31:39.631 "data_offset": 0, 00:31:39.631 "data_size": 63488 00:31:39.631 }, 00:31:39.631 { 00:31:39.631 "name": "BaseBdev2", 00:31:39.631 "uuid": "0ede4a4f-7dcb-5e67-b18f-cd34dccd5e26", 00:31:39.631 "is_configured": true, 00:31:39.631 "data_offset": 2048, 00:31:39.631 "data_size": 63488 00:31:39.631 } 00:31:39.631 ] 00:31:39.631 }' 00:31:39.631 18:30:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:31:39.631 18:30:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:39.631 18:30:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:31:39.631 18:30:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:39.631 18:30:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:39.631 [2024-12-06 18:30:09.977446] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev spare is claimed 00:31:39.631 [2024-12-06 18:30:09.996035] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca3360 00:31:39.631 18:30:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:39.631 18:30:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:31:39.631 [2024-12-06 18:30:09.998172] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:31:40.198 18:30:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:31:40.198 18:30:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:31:40.198 18:30:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:31:40.198 18:30:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:31:40.198 18:30:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:31:40.198 18:30:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:40.198 18:30:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:40.198 18:30:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:40.198 18:30:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:40.198 18:30:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:40.198 18:30:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:31:40.198 "name": "raid_bdev1", 00:31:40.198 "uuid": "36d38983-a573-415f-9aaf-cd3cf56a5370", 00:31:40.198 "strip_size_kb": 0, 00:31:40.198 "state": "online", 00:31:40.198 "raid_level": "raid1", 00:31:40.198 "superblock": true, 00:31:40.198 "num_base_bdevs": 2, 00:31:40.198 
"num_base_bdevs_discovered": 2, 00:31:40.198 "num_base_bdevs_operational": 2, 00:31:40.198 "process": { 00:31:40.198 "type": "rebuild", 00:31:40.198 "target": "spare", 00:31:40.198 "progress": { 00:31:40.198 "blocks": 20480, 00:31:40.198 "percent": 32 00:31:40.198 } 00:31:40.198 }, 00:31:40.198 "base_bdevs_list": [ 00:31:40.198 { 00:31:40.198 "name": "spare", 00:31:40.198 "uuid": "a8e31374-1aeb-57c1-a5f0-33903dd84aee", 00:31:40.198 "is_configured": true, 00:31:40.198 "data_offset": 2048, 00:31:40.198 "data_size": 63488 00:31:40.198 }, 00:31:40.198 { 00:31:40.198 "name": "BaseBdev2", 00:31:40.198 "uuid": "0ede4a4f-7dcb-5e67-b18f-cd34dccd5e26", 00:31:40.198 "is_configured": true, 00:31:40.198 "data_offset": 2048, 00:31:40.198 "data_size": 63488 00:31:40.198 } 00:31:40.198 ] 00:31:40.198 }' 00:31:40.198 18:30:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:31:40.198 18:30:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:31:40.198 18:30:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:31:40.198 18:30:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:31:40.198 18:30:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:31:40.198 18:30:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:40.198 18:30:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:40.198 [2024-12-06 18:30:11.134178] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:31:40.456 [2024-12-06 18:30:11.203608] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:31:40.456 [2024-12-06 18:30:11.203736] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:31:40.456 [2024-12-06 18:30:11.203765] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:31:40.456 [2024-12-06 18:30:11.203798] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:31:40.456 18:30:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:40.456 18:30:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:31:40.456 18:30:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:31:40.456 18:30:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:31:40.456 18:30:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:31:40.456 18:30:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:31:40.456 18:30:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:31:40.456 18:30:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:31:40.456 18:30:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:31:40.456 18:30:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:31:40.456 18:30:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:31:40.456 18:30:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:40.456 18:30:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:40.456 18:30:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:40.456 18:30:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:40.456 18:30:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:40.456 18:30:11 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:31:40.456 "name": "raid_bdev1", 00:31:40.456 "uuid": "36d38983-a573-415f-9aaf-cd3cf56a5370", 00:31:40.456 "strip_size_kb": 0, 00:31:40.456 "state": "online", 00:31:40.456 "raid_level": "raid1", 00:31:40.456 "superblock": true, 00:31:40.456 "num_base_bdevs": 2, 00:31:40.456 "num_base_bdevs_discovered": 1, 00:31:40.456 "num_base_bdevs_operational": 1, 00:31:40.456 "base_bdevs_list": [ 00:31:40.456 { 00:31:40.456 "name": null, 00:31:40.456 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:40.456 "is_configured": false, 00:31:40.456 "data_offset": 0, 00:31:40.456 "data_size": 63488 00:31:40.456 }, 00:31:40.456 { 00:31:40.456 "name": "BaseBdev2", 00:31:40.456 "uuid": "0ede4a4f-7dcb-5e67-b18f-cd34dccd5e26", 00:31:40.456 "is_configured": true, 00:31:40.456 "data_offset": 2048, 00:31:40.456 "data_size": 63488 00:31:40.456 } 00:31:40.456 ] 00:31:40.456 }' 00:31:40.456 18:30:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:31:40.456 18:30:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:41.021 18:30:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:31:41.021 18:30:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:31:41.021 18:30:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:31:41.021 18:30:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:31:41.021 18:30:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:31:41.021 18:30:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:41.021 18:30:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:41.021 18:30:11 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:31:41.021 18:30:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:41.021 18:30:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:41.021 18:30:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:31:41.021 "name": "raid_bdev1", 00:31:41.021 "uuid": "36d38983-a573-415f-9aaf-cd3cf56a5370", 00:31:41.021 "strip_size_kb": 0, 00:31:41.021 "state": "online", 00:31:41.021 "raid_level": "raid1", 00:31:41.021 "superblock": true, 00:31:41.021 "num_base_bdevs": 2, 00:31:41.021 "num_base_bdevs_discovered": 1, 00:31:41.021 "num_base_bdevs_operational": 1, 00:31:41.021 "base_bdevs_list": [ 00:31:41.021 { 00:31:41.021 "name": null, 00:31:41.021 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:41.021 "is_configured": false, 00:31:41.021 "data_offset": 0, 00:31:41.021 "data_size": 63488 00:31:41.021 }, 00:31:41.021 { 00:31:41.021 "name": "BaseBdev2", 00:31:41.021 "uuid": "0ede4a4f-7dcb-5e67-b18f-cd34dccd5e26", 00:31:41.021 "is_configured": true, 00:31:41.021 "data_offset": 2048, 00:31:41.021 "data_size": 63488 00:31:41.021 } 00:31:41.021 ] 00:31:41.021 }' 00:31:41.021 18:30:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:31:41.021 18:30:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:31:41.021 18:30:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:31:41.021 18:30:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:31:41.021 18:30:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:31:41.021 18:30:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:41.021 18:30:11 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:31:41.021 [2024-12-06 18:30:11.796009] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:31:41.022 [2024-12-06 18:30:11.812512] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca3430 00:31:41.022 18:30:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:41.022 18:30:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:31:41.022 [2024-12-06 18:30:11.814775] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:31:41.958 18:30:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:31:41.958 18:30:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:31:41.959 18:30:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:31:41.959 18:30:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:31:41.959 18:30:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:31:41.959 18:30:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:41.959 18:30:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:41.959 18:30:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:41.959 18:30:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:41.959 18:30:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:41.959 18:30:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:31:41.959 "name": "raid_bdev1", 00:31:41.959 "uuid": "36d38983-a573-415f-9aaf-cd3cf56a5370", 00:31:41.959 "strip_size_kb": 0, 00:31:41.959 "state": "online", 
00:31:41.959 "raid_level": "raid1", 00:31:41.959 "superblock": true, 00:31:41.959 "num_base_bdevs": 2, 00:31:41.959 "num_base_bdevs_discovered": 2, 00:31:41.959 "num_base_bdevs_operational": 2, 00:31:41.959 "process": { 00:31:41.959 "type": "rebuild", 00:31:41.959 "target": "spare", 00:31:41.959 "progress": { 00:31:41.959 "blocks": 20480, 00:31:41.959 "percent": 32 00:31:41.959 } 00:31:41.959 }, 00:31:41.959 "base_bdevs_list": [ 00:31:41.959 { 00:31:41.959 "name": "spare", 00:31:41.959 "uuid": "a8e31374-1aeb-57c1-a5f0-33903dd84aee", 00:31:41.959 "is_configured": true, 00:31:41.959 "data_offset": 2048, 00:31:41.959 "data_size": 63488 00:31:41.959 }, 00:31:41.959 { 00:31:41.959 "name": "BaseBdev2", 00:31:41.959 "uuid": "0ede4a4f-7dcb-5e67-b18f-cd34dccd5e26", 00:31:41.959 "is_configured": true, 00:31:41.959 "data_offset": 2048, 00:31:41.959 "data_size": 63488 00:31:41.959 } 00:31:41.959 ] 00:31:41.959 }' 00:31:41.959 18:30:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:31:42.218 18:30:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:31:42.218 18:30:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:31:42.218 18:30:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:31:42.218 18:30:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:31:42.218 18:30:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:31:42.218 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:31:42.218 18:30:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:31:42.218 18:30:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:31:42.218 18:30:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 
']' 00:31:42.218 18:30:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=389 00:31:42.218 18:30:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:31:42.218 18:30:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:31:42.218 18:30:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:31:42.218 18:30:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:31:42.218 18:30:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:31:42.218 18:30:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:31:42.218 18:30:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:42.218 18:30:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:42.218 18:30:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:42.218 18:30:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:42.218 18:30:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:42.218 18:30:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:31:42.218 "name": "raid_bdev1", 00:31:42.218 "uuid": "36d38983-a573-415f-9aaf-cd3cf56a5370", 00:31:42.218 "strip_size_kb": 0, 00:31:42.218 "state": "online", 00:31:42.218 "raid_level": "raid1", 00:31:42.218 "superblock": true, 00:31:42.218 "num_base_bdevs": 2, 00:31:42.218 "num_base_bdevs_discovered": 2, 00:31:42.218 "num_base_bdevs_operational": 2, 00:31:42.218 "process": { 00:31:42.218 "type": "rebuild", 00:31:42.218 "target": "spare", 00:31:42.218 "progress": { 00:31:42.218 "blocks": 22528, 00:31:42.218 "percent": 35 00:31:42.218 } 00:31:42.218 }, 00:31:42.218 
"base_bdevs_list": [ 00:31:42.218 { 00:31:42.218 "name": "spare", 00:31:42.218 "uuid": "a8e31374-1aeb-57c1-a5f0-33903dd84aee", 00:31:42.218 "is_configured": true, 00:31:42.218 "data_offset": 2048, 00:31:42.218 "data_size": 63488 00:31:42.218 }, 00:31:42.218 { 00:31:42.218 "name": "BaseBdev2", 00:31:42.218 "uuid": "0ede4a4f-7dcb-5e67-b18f-cd34dccd5e26", 00:31:42.218 "is_configured": true, 00:31:42.218 "data_offset": 2048, 00:31:42.218 "data_size": 63488 00:31:42.218 } 00:31:42.218 ] 00:31:42.218 }' 00:31:42.218 18:30:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:31:42.218 18:30:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:31:42.218 18:30:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:31:42.218 18:30:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:31:42.218 18:30:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:31:43.157 18:30:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:31:43.157 18:30:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:31:43.157 18:30:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:31:43.157 18:30:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:31:43.157 18:30:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:31:43.157 18:30:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:31:43.157 18:30:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:43.157 18:30:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:43.157 18:30:14 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:43.439 18:30:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:43.439 18:30:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:43.439 18:30:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:31:43.439 "name": "raid_bdev1", 00:31:43.439 "uuid": "36d38983-a573-415f-9aaf-cd3cf56a5370", 00:31:43.439 "strip_size_kb": 0, 00:31:43.439 "state": "online", 00:31:43.439 "raid_level": "raid1", 00:31:43.439 "superblock": true, 00:31:43.439 "num_base_bdevs": 2, 00:31:43.439 "num_base_bdevs_discovered": 2, 00:31:43.439 "num_base_bdevs_operational": 2, 00:31:43.439 "process": { 00:31:43.439 "type": "rebuild", 00:31:43.439 "target": "spare", 00:31:43.439 "progress": { 00:31:43.439 "blocks": 45056, 00:31:43.439 "percent": 70 00:31:43.439 } 00:31:43.439 }, 00:31:43.439 "base_bdevs_list": [ 00:31:43.439 { 00:31:43.439 "name": "spare", 00:31:43.439 "uuid": "a8e31374-1aeb-57c1-a5f0-33903dd84aee", 00:31:43.439 "is_configured": true, 00:31:43.439 "data_offset": 2048, 00:31:43.439 "data_size": 63488 00:31:43.439 }, 00:31:43.439 { 00:31:43.439 "name": "BaseBdev2", 00:31:43.439 "uuid": "0ede4a4f-7dcb-5e67-b18f-cd34dccd5e26", 00:31:43.439 "is_configured": true, 00:31:43.439 "data_offset": 2048, 00:31:43.439 "data_size": 63488 00:31:43.439 } 00:31:43.439 ] 00:31:43.439 }' 00:31:43.439 18:30:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:31:43.439 18:30:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:31:43.439 18:30:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:31:43.439 18:30:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:31:43.439 18:30:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 
00:31:44.008 [2024-12-06 18:30:14.927761] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:31:44.008 [2024-12-06 18:30:14.928052] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:31:44.009 [2024-12-06 18:30:14.928209] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:31:44.577 18:30:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:31:44.577 18:30:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:31:44.577 18:30:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:31:44.577 18:30:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:31:44.577 18:30:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:31:44.577 18:30:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:31:44.577 18:30:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:44.577 18:30:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:44.577 18:30:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:44.577 18:30:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:44.577 18:30:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:44.577 18:30:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:31:44.577 "name": "raid_bdev1", 00:31:44.577 "uuid": "36d38983-a573-415f-9aaf-cd3cf56a5370", 00:31:44.577 "strip_size_kb": 0, 00:31:44.577 "state": "online", 00:31:44.577 "raid_level": "raid1", 00:31:44.577 "superblock": true, 00:31:44.577 "num_base_bdevs": 2, 00:31:44.577 
"num_base_bdevs_discovered": 2, 00:31:44.577 "num_base_bdevs_operational": 2, 00:31:44.577 "base_bdevs_list": [ 00:31:44.577 { 00:31:44.577 "name": "spare", 00:31:44.577 "uuid": "a8e31374-1aeb-57c1-a5f0-33903dd84aee", 00:31:44.577 "is_configured": true, 00:31:44.577 "data_offset": 2048, 00:31:44.577 "data_size": 63488 00:31:44.577 }, 00:31:44.577 { 00:31:44.577 "name": "BaseBdev2", 00:31:44.577 "uuid": "0ede4a4f-7dcb-5e67-b18f-cd34dccd5e26", 00:31:44.577 "is_configured": true, 00:31:44.577 "data_offset": 2048, 00:31:44.577 "data_size": 63488 00:31:44.577 } 00:31:44.577 ] 00:31:44.577 }' 00:31:44.577 18:30:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:31:44.577 18:30:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:31:44.577 18:30:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:31:44.577 18:30:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:31:44.577 18:30:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:31:44.577 18:30:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:31:44.577 18:30:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:31:44.577 18:30:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:31:44.577 18:30:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:31:44.577 18:30:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:31:44.577 18:30:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:44.577 18:30:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:44.577 18:30:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq 
-r '.[] | select(.name == "raid_bdev1")' 00:31:44.577 18:30:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:44.577 18:30:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:44.577 18:30:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:31:44.577 "name": "raid_bdev1", 00:31:44.577 "uuid": "36d38983-a573-415f-9aaf-cd3cf56a5370", 00:31:44.577 "strip_size_kb": 0, 00:31:44.577 "state": "online", 00:31:44.577 "raid_level": "raid1", 00:31:44.577 "superblock": true, 00:31:44.577 "num_base_bdevs": 2, 00:31:44.577 "num_base_bdevs_discovered": 2, 00:31:44.577 "num_base_bdevs_operational": 2, 00:31:44.577 "base_bdevs_list": [ 00:31:44.577 { 00:31:44.577 "name": "spare", 00:31:44.577 "uuid": "a8e31374-1aeb-57c1-a5f0-33903dd84aee", 00:31:44.577 "is_configured": true, 00:31:44.577 "data_offset": 2048, 00:31:44.577 "data_size": 63488 00:31:44.577 }, 00:31:44.577 { 00:31:44.577 "name": "BaseBdev2", 00:31:44.577 "uuid": "0ede4a4f-7dcb-5e67-b18f-cd34dccd5e26", 00:31:44.577 "is_configured": true, 00:31:44.577 "data_offset": 2048, 00:31:44.577 "data_size": 63488 00:31:44.577 } 00:31:44.577 ] 00:31:44.577 }' 00:31:44.577 18:30:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:31:44.577 18:30:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:31:44.577 18:30:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:31:44.577 18:30:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:31:44.577 18:30:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:31:44.577 18:30:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:31:44.577 18:30:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:31:44.577 18:30:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:31:44.577 18:30:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:31:44.577 18:30:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:31:44.577 18:30:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:31:44.577 18:30:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:31:44.577 18:30:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:31:44.577 18:30:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:31:44.577 18:30:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:44.577 18:30:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:44.577 18:30:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:44.577 18:30:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:44.577 18:30:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:44.836 18:30:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:31:44.836 "name": "raid_bdev1", 00:31:44.836 "uuid": "36d38983-a573-415f-9aaf-cd3cf56a5370", 00:31:44.836 "strip_size_kb": 0, 00:31:44.836 "state": "online", 00:31:44.836 "raid_level": "raid1", 00:31:44.836 "superblock": true, 00:31:44.836 "num_base_bdevs": 2, 00:31:44.836 "num_base_bdevs_discovered": 2, 00:31:44.836 "num_base_bdevs_operational": 2, 00:31:44.836 "base_bdevs_list": [ 00:31:44.836 { 00:31:44.836 "name": "spare", 00:31:44.836 "uuid": "a8e31374-1aeb-57c1-a5f0-33903dd84aee", 00:31:44.836 "is_configured": true, 00:31:44.836 "data_offset": 2048, 00:31:44.836 
"data_size": 63488 00:31:44.836 }, 00:31:44.836 { 00:31:44.836 "name": "BaseBdev2", 00:31:44.836 "uuid": "0ede4a4f-7dcb-5e67-b18f-cd34dccd5e26", 00:31:44.836 "is_configured": true, 00:31:44.836 "data_offset": 2048, 00:31:44.836 "data_size": 63488 00:31:44.836 } 00:31:44.836 ] 00:31:44.836 }' 00:31:44.836 18:30:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:31:44.836 18:30:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:45.094 18:30:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:31:45.094 18:30:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:45.094 18:30:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:45.094 [2024-12-06 18:30:15.922682] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:31:45.094 [2024-12-06 18:30:15.922861] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:31:45.094 [2024-12-06 18:30:15.922965] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:31:45.094 [2024-12-06 18:30:15.923037] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:31:45.094 [2024-12-06 18:30:15.923053] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:31:45.094 18:30:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:45.094 18:30:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:45.094 18:30:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:45.094 18:30:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:45.094 18:30:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 
00:31:45.094 18:30:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:45.094 18:30:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:31:45.094 18:30:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:31:45.094 18:30:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:31:45.094 18:30:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:31:45.094 18:30:15 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:31:45.094 18:30:15 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:31:45.094 18:30:15 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:31:45.094 18:30:15 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:31:45.094 18:30:15 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:31:45.094 18:30:15 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:31:45.094 18:30:15 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:31:45.094 18:30:15 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:31:45.094 18:30:15 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:31:45.352 /dev/nbd0 00:31:45.352 18:30:16 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:31:45.352 18:30:16 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:31:45.352 18:30:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:31:45.352 18:30:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 
-- # local i 00:31:45.352 18:30:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:31:45.352 18:30:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:31:45.352 18:30:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:31:45.352 18:30:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:31:45.352 18:30:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:31:45.352 18:30:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:31:45.352 18:30:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:31:45.352 1+0 records in 00:31:45.352 1+0 records out 00:31:45.352 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000321487 s, 12.7 MB/s 00:31:45.352 18:30:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:31:45.352 18:30:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:31:45.352 18:30:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:31:45.352 18:30:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:31:45.352 18:30:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:31:45.352 18:30:16 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:31:45.352 18:30:16 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:31:45.352 18:30:16 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:31:45.610 /dev/nbd1 00:31:45.610 18:30:16 
bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:31:45.610 18:30:16 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:31:45.610 18:30:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:31:45.610 18:30:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:31:45.610 18:30:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:31:45.610 18:30:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:31:45.610 18:30:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:31:45.610 18:30:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:31:45.610 18:30:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:31:45.610 18:30:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:31:45.610 18:30:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:31:45.610 1+0 records in 00:31:45.610 1+0 records out 00:31:45.610 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000472024 s, 8.7 MB/s 00:31:45.610 18:30:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:31:45.610 18:30:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:31:45.610 18:30:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:31:45.610 18:30:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:31:45.610 18:30:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:31:45.610 18:30:16 
bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:31:45.610 18:30:16 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:31:45.610 18:30:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:31:45.868 18:30:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:31:45.868 18:30:16 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:31:45.868 18:30:16 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:31:45.868 18:30:16 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:31:45.868 18:30:16 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:31:45.868 18:30:16 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:31:45.868 18:30:16 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:31:46.127 18:30:16 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:31:46.128 18:30:16 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:31:46.128 18:30:16 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:31:46.128 18:30:16 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:31:46.128 18:30:16 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:31:46.128 18:30:16 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:31:46.128 18:30:16 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:31:46.128 18:30:16 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:31:46.128 18:30:16 bdev_raid.raid_rebuild_test_sb -- 
bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:31:46.128 18:30:16 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:31:46.387 18:30:17 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:31:46.387 18:30:17 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:31:46.387 18:30:17 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:31:46.387 18:30:17 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:31:46.387 18:30:17 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:31:46.387 18:30:17 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:31:46.387 18:30:17 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:31:46.387 18:30:17 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:31:46.387 18:30:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:31:46.387 18:30:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:31:46.387 18:30:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:46.387 18:30:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:46.387 18:30:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:46.387 18:30:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:31:46.387 18:30:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:46.387 18:30:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:46.387 [2024-12-06 18:30:17.263181] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on 
spare_delay 00:31:46.387 [2024-12-06 18:30:17.263237] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:31:46.387 [2024-12-06 18:30:17.263266] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:31:46.387 [2024-12-06 18:30:17.263278] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:31:46.387 [2024-12-06 18:30:17.265776] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:31:46.387 [2024-12-06 18:30:17.265970] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:31:46.387 [2024-12-06 18:30:17.266097] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:31:46.387 [2024-12-06 18:30:17.266169] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:31:46.387 [2024-12-06 18:30:17.266312] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:31:46.387 spare 00:31:46.387 18:30:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:46.387 18:30:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:31:46.387 18:30:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:46.387 18:30:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:46.646 [2024-12-06 18:30:17.366242] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:31:46.646 [2024-12-06 18:30:17.366289] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:31:46.646 [2024-12-06 18:30:17.366621] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1ae0 00:31:46.646 [2024-12-06 18:30:17.366834] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:31:46.646 [2024-12-06 18:30:17.366847] 
bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:31:46.646 [2024-12-06 18:30:17.367103] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:31:46.646 18:30:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:46.646 18:30:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:31:46.646 18:30:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:31:46.646 18:30:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:31:46.646 18:30:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:31:46.646 18:30:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:31:46.646 18:30:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:31:46.646 18:30:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:31:46.646 18:30:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:31:46.646 18:30:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:31:46.646 18:30:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:31:46.646 18:30:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:46.646 18:30:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:46.646 18:30:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:46.646 18:30:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:46.646 18:30:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:46.646 
18:30:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:31:46.646 "name": "raid_bdev1", 00:31:46.646 "uuid": "36d38983-a573-415f-9aaf-cd3cf56a5370", 00:31:46.646 "strip_size_kb": 0, 00:31:46.646 "state": "online", 00:31:46.646 "raid_level": "raid1", 00:31:46.646 "superblock": true, 00:31:46.646 "num_base_bdevs": 2, 00:31:46.646 "num_base_bdevs_discovered": 2, 00:31:46.646 "num_base_bdevs_operational": 2, 00:31:46.646 "base_bdevs_list": [ 00:31:46.646 { 00:31:46.646 "name": "spare", 00:31:46.646 "uuid": "a8e31374-1aeb-57c1-a5f0-33903dd84aee", 00:31:46.646 "is_configured": true, 00:31:46.646 "data_offset": 2048, 00:31:46.646 "data_size": 63488 00:31:46.646 }, 00:31:46.646 { 00:31:46.646 "name": "BaseBdev2", 00:31:46.646 "uuid": "0ede4a4f-7dcb-5e67-b18f-cd34dccd5e26", 00:31:46.646 "is_configured": true, 00:31:46.646 "data_offset": 2048, 00:31:46.647 "data_size": 63488 00:31:46.647 } 00:31:46.647 ] 00:31:46.647 }' 00:31:46.647 18:30:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:31:46.647 18:30:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:46.906 18:30:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:31:46.906 18:30:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:31:46.906 18:30:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:31:46.906 18:30:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:31:46.906 18:30:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:31:46.906 18:30:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:46.906 18:30:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:46.906 18:30:17 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:46.906 18:30:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:46.906 18:30:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:46.906 18:30:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:31:46.906 "name": "raid_bdev1", 00:31:46.906 "uuid": "36d38983-a573-415f-9aaf-cd3cf56a5370", 00:31:46.906 "strip_size_kb": 0, 00:31:46.906 "state": "online", 00:31:46.906 "raid_level": "raid1", 00:31:46.906 "superblock": true, 00:31:46.906 "num_base_bdevs": 2, 00:31:46.906 "num_base_bdevs_discovered": 2, 00:31:46.906 "num_base_bdevs_operational": 2, 00:31:46.906 "base_bdevs_list": [ 00:31:46.906 { 00:31:46.906 "name": "spare", 00:31:46.906 "uuid": "a8e31374-1aeb-57c1-a5f0-33903dd84aee", 00:31:46.906 "is_configured": true, 00:31:46.906 "data_offset": 2048, 00:31:46.906 "data_size": 63488 00:31:46.906 }, 00:31:46.906 { 00:31:46.906 "name": "BaseBdev2", 00:31:46.906 "uuid": "0ede4a4f-7dcb-5e67-b18f-cd34dccd5e26", 00:31:46.906 "is_configured": true, 00:31:46.906 "data_offset": 2048, 00:31:46.906 "data_size": 63488 00:31:46.906 } 00:31:46.906 ] 00:31:46.906 }' 00:31:46.906 18:30:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:31:47.175 18:30:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:31:47.175 18:30:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:31:47.175 18:30:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:31:47.175 18:30:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:47.175 18:30:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:31:47.175 18:30:17 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:31:47.175 18:30:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:47.175 18:30:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:47.175 18:30:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:31:47.175 18:30:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:31:47.175 18:30:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:47.175 18:30:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:47.175 [2024-12-06 18:30:17.966312] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:31:47.175 18:30:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:47.175 18:30:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:31:47.175 18:30:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:31:47.175 18:30:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:31:47.175 18:30:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:31:47.175 18:30:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:31:47.175 18:30:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:31:47.175 18:30:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:31:47.175 18:30:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:31:47.175 18:30:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:31:47.175 18:30:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 
00:31:47.175 18:30:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:47.175 18:30:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:47.175 18:30:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:47.175 18:30:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:47.175 18:30:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:47.175 18:30:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:31:47.175 "name": "raid_bdev1", 00:31:47.175 "uuid": "36d38983-a573-415f-9aaf-cd3cf56a5370", 00:31:47.175 "strip_size_kb": 0, 00:31:47.175 "state": "online", 00:31:47.175 "raid_level": "raid1", 00:31:47.175 "superblock": true, 00:31:47.175 "num_base_bdevs": 2, 00:31:47.175 "num_base_bdevs_discovered": 1, 00:31:47.175 "num_base_bdevs_operational": 1, 00:31:47.175 "base_bdevs_list": [ 00:31:47.175 { 00:31:47.175 "name": null, 00:31:47.175 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:47.175 "is_configured": false, 00:31:47.175 "data_offset": 0, 00:31:47.175 "data_size": 63488 00:31:47.175 }, 00:31:47.175 { 00:31:47.175 "name": "BaseBdev2", 00:31:47.175 "uuid": "0ede4a4f-7dcb-5e67-b18f-cd34dccd5e26", 00:31:47.175 "is_configured": true, 00:31:47.175 "data_offset": 2048, 00:31:47.175 "data_size": 63488 00:31:47.175 } 00:31:47.175 ] 00:31:47.175 }' 00:31:47.175 18:30:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:31:47.175 18:30:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:47.767 18:30:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:31:47.767 18:30:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:47.767 18:30:18 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:47.767 [2024-12-06 18:30:18.410243] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:31:47.767 [2024-12-06 18:30:18.410437] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:31:47.767 [2024-12-06 18:30:18.410457] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:31:47.767 [2024-12-06 18:30:18.410498] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:31:47.767 [2024-12-06 18:30:18.427820] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1bb0 00:31:47.767 18:30:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:47.767 [2024-12-06 18:30:18.430490] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:31:47.767 18:30:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:31:48.705 18:30:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:31:48.705 18:30:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:31:48.705 18:30:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:31:48.705 18:30:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:31:48.705 18:30:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:31:48.705 18:30:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:48.705 18:30:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:48.705 18:30:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:31:48.705 18:30:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:48.705 18:30:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:48.705 18:30:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:31:48.705 "name": "raid_bdev1", 00:31:48.705 "uuid": "36d38983-a573-415f-9aaf-cd3cf56a5370", 00:31:48.705 "strip_size_kb": 0, 00:31:48.705 "state": "online", 00:31:48.705 "raid_level": "raid1", 00:31:48.705 "superblock": true, 00:31:48.705 "num_base_bdevs": 2, 00:31:48.705 "num_base_bdevs_discovered": 2, 00:31:48.705 "num_base_bdevs_operational": 2, 00:31:48.705 "process": { 00:31:48.705 "type": "rebuild", 00:31:48.705 "target": "spare", 00:31:48.705 "progress": { 00:31:48.705 "blocks": 20480, 00:31:48.705 "percent": 32 00:31:48.705 } 00:31:48.705 }, 00:31:48.705 "base_bdevs_list": [ 00:31:48.705 { 00:31:48.705 "name": "spare", 00:31:48.705 "uuid": "a8e31374-1aeb-57c1-a5f0-33903dd84aee", 00:31:48.705 "is_configured": true, 00:31:48.705 "data_offset": 2048, 00:31:48.705 "data_size": 63488 00:31:48.705 }, 00:31:48.705 { 00:31:48.705 "name": "BaseBdev2", 00:31:48.705 "uuid": "0ede4a4f-7dcb-5e67-b18f-cd34dccd5e26", 00:31:48.705 "is_configured": true, 00:31:48.705 "data_offset": 2048, 00:31:48.705 "data_size": 63488 00:31:48.705 } 00:31:48.705 ] 00:31:48.705 }' 00:31:48.705 18:30:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:31:48.705 18:30:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:31:48.705 18:30:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:31:48.706 18:30:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:31:48.706 18:30:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:31:48.706 18:30:19 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:48.706 18:30:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:48.706 [2024-12-06 18:30:19.562375] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:31:48.706 [2024-12-06 18:30:19.635776] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:31:48.706 [2024-12-06 18:30:19.635852] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:31:48.706 [2024-12-06 18:30:19.635869] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:31:48.706 [2024-12-06 18:30:19.635881] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:31:48.963 18:30:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:48.963 18:30:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:31:48.963 18:30:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:31:48.963 18:30:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:31:48.963 18:30:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:31:48.963 18:30:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:31:48.963 18:30:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:31:48.963 18:30:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:31:48.963 18:30:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:31:48.963 18:30:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:31:48.964 18:30:19 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:31:48.964 18:30:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:48.964 18:30:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:48.964 18:30:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:48.964 18:30:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:48.964 18:30:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:48.964 18:30:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:31:48.964 "name": "raid_bdev1", 00:31:48.964 "uuid": "36d38983-a573-415f-9aaf-cd3cf56a5370", 00:31:48.964 "strip_size_kb": 0, 00:31:48.964 "state": "online", 00:31:48.964 "raid_level": "raid1", 00:31:48.964 "superblock": true, 00:31:48.964 "num_base_bdevs": 2, 00:31:48.964 "num_base_bdevs_discovered": 1, 00:31:48.964 "num_base_bdevs_operational": 1, 00:31:48.964 "base_bdevs_list": [ 00:31:48.964 { 00:31:48.964 "name": null, 00:31:48.964 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:48.964 "is_configured": false, 00:31:48.964 "data_offset": 0, 00:31:48.964 "data_size": 63488 00:31:48.964 }, 00:31:48.964 { 00:31:48.964 "name": "BaseBdev2", 00:31:48.964 "uuid": "0ede4a4f-7dcb-5e67-b18f-cd34dccd5e26", 00:31:48.964 "is_configured": true, 00:31:48.964 "data_offset": 2048, 00:31:48.964 "data_size": 63488 00:31:48.964 } 00:31:48.964 ] 00:31:48.964 }' 00:31:48.964 18:30:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:31:48.964 18:30:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:49.223 18:30:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:31:49.223 18:30:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:31:49.223 18:30:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:49.223 [2024-12-06 18:30:20.123769] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:31:49.223 [2024-12-06 18:30:20.123846] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:31:49.223 [2024-12-06 18:30:20.123870] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:31:49.223 [2024-12-06 18:30:20.123885] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:31:49.223 [2024-12-06 18:30:20.124367] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:31:49.223 [2024-12-06 18:30:20.124395] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:31:49.223 [2024-12-06 18:30:20.124526] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:31:49.223 [2024-12-06 18:30:20.124545] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:31:49.223 [2024-12-06 18:30:20.124556] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:31:49.223 [2024-12-06 18:30:20.124586] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:31:49.223 [2024-12-06 18:30:20.140493] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1c80 00:31:49.223 spare 00:31:49.223 18:30:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:49.223 18:30:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:31:49.223 [2024-12-06 18:30:20.142627] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:31:50.601 18:30:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:31:50.601 18:30:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:31:50.601 18:30:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:31:50.601 18:30:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:31:50.601 18:30:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:31:50.601 18:30:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:50.601 18:30:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:50.601 18:30:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:50.601 18:30:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:50.601 18:30:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:50.601 18:30:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:31:50.601 "name": "raid_bdev1", 00:31:50.601 "uuid": "36d38983-a573-415f-9aaf-cd3cf56a5370", 00:31:50.601 "strip_size_kb": 0, 00:31:50.601 "state": "online", 00:31:50.601 
"raid_level": "raid1", 00:31:50.601 "superblock": true, 00:31:50.601 "num_base_bdevs": 2, 00:31:50.601 "num_base_bdevs_discovered": 2, 00:31:50.601 "num_base_bdevs_operational": 2, 00:31:50.601 "process": { 00:31:50.601 "type": "rebuild", 00:31:50.601 "target": "spare", 00:31:50.601 "progress": { 00:31:50.601 "blocks": 20480, 00:31:50.601 "percent": 32 00:31:50.601 } 00:31:50.601 }, 00:31:50.601 "base_bdevs_list": [ 00:31:50.601 { 00:31:50.601 "name": "spare", 00:31:50.601 "uuid": "a8e31374-1aeb-57c1-a5f0-33903dd84aee", 00:31:50.601 "is_configured": true, 00:31:50.601 "data_offset": 2048, 00:31:50.601 "data_size": 63488 00:31:50.601 }, 00:31:50.601 { 00:31:50.601 "name": "BaseBdev2", 00:31:50.601 "uuid": "0ede4a4f-7dcb-5e67-b18f-cd34dccd5e26", 00:31:50.601 "is_configured": true, 00:31:50.601 "data_offset": 2048, 00:31:50.601 "data_size": 63488 00:31:50.601 } 00:31:50.601 ] 00:31:50.601 }' 00:31:50.601 18:30:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:31:50.601 18:30:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:31:50.601 18:30:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:31:50.601 18:30:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:31:50.601 18:30:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:31:50.601 18:30:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:50.601 18:30:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:50.601 [2024-12-06 18:30:21.298536] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:31:50.601 [2024-12-06 18:30:21.348039] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:31:50.601 [2024-12-06 18:30:21.348109] bdev_raid.c: 
345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:31:50.601 [2024-12-06 18:30:21.348129] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:31:50.601 [2024-12-06 18:30:21.348138] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:31:50.601 18:30:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:50.601 18:30:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:31:50.601 18:30:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:31:50.601 18:30:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:31:50.601 18:30:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:31:50.601 18:30:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:31:50.601 18:30:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:31:50.601 18:30:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:31:50.601 18:30:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:31:50.601 18:30:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:31:50.601 18:30:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:31:50.601 18:30:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:50.601 18:30:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:50.601 18:30:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:50.601 18:30:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:50.601 18:30:21 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:50.601 18:30:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:31:50.601 "name": "raid_bdev1", 00:31:50.601 "uuid": "36d38983-a573-415f-9aaf-cd3cf56a5370", 00:31:50.601 "strip_size_kb": 0, 00:31:50.601 "state": "online", 00:31:50.601 "raid_level": "raid1", 00:31:50.601 "superblock": true, 00:31:50.601 "num_base_bdevs": 2, 00:31:50.601 "num_base_bdevs_discovered": 1, 00:31:50.601 "num_base_bdevs_operational": 1, 00:31:50.601 "base_bdevs_list": [ 00:31:50.601 { 00:31:50.601 "name": null, 00:31:50.601 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:50.601 "is_configured": false, 00:31:50.601 "data_offset": 0, 00:31:50.601 "data_size": 63488 00:31:50.601 }, 00:31:50.601 { 00:31:50.601 "name": "BaseBdev2", 00:31:50.602 "uuid": "0ede4a4f-7dcb-5e67-b18f-cd34dccd5e26", 00:31:50.602 "is_configured": true, 00:31:50.602 "data_offset": 2048, 00:31:50.602 "data_size": 63488 00:31:50.602 } 00:31:50.602 ] 00:31:50.602 }' 00:31:50.602 18:30:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:31:50.602 18:30:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:51.168 18:30:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:31:51.168 18:30:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:31:51.168 18:30:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:31:51.168 18:30:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:31:51.168 18:30:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:31:51.168 18:30:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:51.168 18:30:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # 
jq -r '.[] | select(.name == "raid_bdev1")' 00:31:51.168 18:30:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:51.168 18:30:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:51.168 18:30:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:51.168 18:30:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:31:51.168 "name": "raid_bdev1", 00:31:51.168 "uuid": "36d38983-a573-415f-9aaf-cd3cf56a5370", 00:31:51.168 "strip_size_kb": 0, 00:31:51.168 "state": "online", 00:31:51.168 "raid_level": "raid1", 00:31:51.168 "superblock": true, 00:31:51.168 "num_base_bdevs": 2, 00:31:51.168 "num_base_bdevs_discovered": 1, 00:31:51.168 "num_base_bdevs_operational": 1, 00:31:51.168 "base_bdevs_list": [ 00:31:51.168 { 00:31:51.168 "name": null, 00:31:51.168 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:51.168 "is_configured": false, 00:31:51.168 "data_offset": 0, 00:31:51.168 "data_size": 63488 00:31:51.168 }, 00:31:51.168 { 00:31:51.168 "name": "BaseBdev2", 00:31:51.168 "uuid": "0ede4a4f-7dcb-5e67-b18f-cd34dccd5e26", 00:31:51.168 "is_configured": true, 00:31:51.168 "data_offset": 2048, 00:31:51.168 "data_size": 63488 00:31:51.168 } 00:31:51.168 ] 00:31:51.168 }' 00:31:51.168 18:30:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:31:51.168 18:30:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:31:51.168 18:30:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:31:51.169 18:30:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:31:51.169 18:30:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:31:51.169 18:30:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:31:51.169 18:30:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:51.169 18:30:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:51.169 18:30:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:31:51.169 18:30:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:51.169 18:30:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:51.169 [2024-12-06 18:30:22.000573] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:31:51.169 [2024-12-06 18:30:22.000645] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:31:51.169 [2024-12-06 18:30:22.000677] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:31:51.169 [2024-12-06 18:30:22.000700] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:31:51.169 [2024-12-06 18:30:22.001177] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:31:51.169 [2024-12-06 18:30:22.001200] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:31:51.169 [2024-12-06 18:30:22.001293] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:31:51.169 [2024-12-06 18:30:22.001307] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:31:51.169 [2024-12-06 18:30:22.001321] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:31:51.169 [2024-12-06 18:30:22.001333] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:31:51.169 BaseBdev1 00:31:51.169 18:30:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:31:51.169 18:30:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:31:52.105 18:30:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:31:52.105 18:30:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:31:52.105 18:30:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:31:52.105 18:30:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:31:52.105 18:30:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:31:52.105 18:30:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:31:52.105 18:30:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:31:52.105 18:30:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:31:52.105 18:30:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:31:52.105 18:30:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:31:52.105 18:30:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:52.105 18:30:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:52.105 18:30:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:52.105 18:30:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:52.105 18:30:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:52.364 18:30:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:31:52.364 "name": "raid_bdev1", 00:31:52.364 "uuid": "36d38983-a573-415f-9aaf-cd3cf56a5370", 00:31:52.364 "strip_size_kb": 0, 
00:31:52.364 "state": "online", 00:31:52.364 "raid_level": "raid1", 00:31:52.364 "superblock": true, 00:31:52.364 "num_base_bdevs": 2, 00:31:52.364 "num_base_bdevs_discovered": 1, 00:31:52.364 "num_base_bdevs_operational": 1, 00:31:52.364 "base_bdevs_list": [ 00:31:52.364 { 00:31:52.364 "name": null, 00:31:52.364 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:52.364 "is_configured": false, 00:31:52.364 "data_offset": 0, 00:31:52.364 "data_size": 63488 00:31:52.364 }, 00:31:52.364 { 00:31:52.364 "name": "BaseBdev2", 00:31:52.364 "uuid": "0ede4a4f-7dcb-5e67-b18f-cd34dccd5e26", 00:31:52.364 "is_configured": true, 00:31:52.364 "data_offset": 2048, 00:31:52.364 "data_size": 63488 00:31:52.364 } 00:31:52.364 ] 00:31:52.364 }' 00:31:52.364 18:30:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:31:52.364 18:30:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:52.624 18:30:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:31:52.624 18:30:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:31:52.624 18:30:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:31:52.624 18:30:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:31:52.624 18:30:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:31:52.624 18:30:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:52.624 18:30:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:52.624 18:30:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:52.624 18:30:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:52.624 18:30:23 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:52.624 18:30:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:31:52.624 "name": "raid_bdev1", 00:31:52.624 "uuid": "36d38983-a573-415f-9aaf-cd3cf56a5370", 00:31:52.624 "strip_size_kb": 0, 00:31:52.624 "state": "online", 00:31:52.624 "raid_level": "raid1", 00:31:52.624 "superblock": true, 00:31:52.624 "num_base_bdevs": 2, 00:31:52.624 "num_base_bdevs_discovered": 1, 00:31:52.624 "num_base_bdevs_operational": 1, 00:31:52.624 "base_bdevs_list": [ 00:31:52.624 { 00:31:52.624 "name": null, 00:31:52.624 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:52.624 "is_configured": false, 00:31:52.624 "data_offset": 0, 00:31:52.624 "data_size": 63488 00:31:52.624 }, 00:31:52.625 { 00:31:52.625 "name": "BaseBdev2", 00:31:52.625 "uuid": "0ede4a4f-7dcb-5e67-b18f-cd34dccd5e26", 00:31:52.625 "is_configured": true, 00:31:52.625 "data_offset": 2048, 00:31:52.625 "data_size": 63488 00:31:52.625 } 00:31:52.625 ] 00:31:52.625 }' 00:31:52.625 18:30:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:31:52.625 18:30:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:31:52.625 18:30:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:31:52.625 18:30:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:31:52.625 18:30:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:31:52.625 18:30:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@652 -- # local es=0 00:31:52.625 18:30:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:31:52.625 18:30:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:31:52.625 18:30:23 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:31:52.625 18:30:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:31:52.625 18:30:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:31:52.625 18:30:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:31:52.625 18:30:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:52.625 18:30:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:52.625 [2024-12-06 18:30:23.563327] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:31:52.625 [2024-12-06 18:30:23.563504] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:31:52.625 [2024-12-06 18:30:23.563523] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:31:52.625 request: 00:31:52.625 { 00:31:52.625 "base_bdev": "BaseBdev1", 00:31:52.625 "raid_bdev": "raid_bdev1", 00:31:52.625 "method": "bdev_raid_add_base_bdev", 00:31:52.625 "req_id": 1 00:31:52.625 } 00:31:52.625 Got JSON-RPC error response 00:31:52.625 response: 00:31:52.625 { 00:31:52.625 "code": -22, 00:31:52.625 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:31:52.625 } 00:31:52.884 18:30:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:31:52.884 18:30:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@655 -- # es=1 00:31:52.884 18:30:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:31:52.884 18:30:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:31:52.884 18:30:23 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@679 -- # (( !es == 0 )) 00:31:52.884 18:30:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:31:53.821 18:30:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:31:53.821 18:30:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:31:53.821 18:30:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:31:53.821 18:30:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:31:53.821 18:30:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:31:53.821 18:30:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:31:53.821 18:30:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:31:53.821 18:30:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:31:53.821 18:30:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:31:53.821 18:30:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:31:53.821 18:30:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:53.821 18:30:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:53.821 18:30:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:53.821 18:30:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:53.821 18:30:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:53.821 18:30:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:31:53.821 "name": "raid_bdev1", 00:31:53.821 "uuid": "36d38983-a573-415f-9aaf-cd3cf56a5370", 
00:31:53.821 "strip_size_kb": 0, 00:31:53.821 "state": "online", 00:31:53.821 "raid_level": "raid1", 00:31:53.821 "superblock": true, 00:31:53.821 "num_base_bdevs": 2, 00:31:53.821 "num_base_bdevs_discovered": 1, 00:31:53.821 "num_base_bdevs_operational": 1, 00:31:53.821 "base_bdevs_list": [ 00:31:53.821 { 00:31:53.821 "name": null, 00:31:53.821 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:53.821 "is_configured": false, 00:31:53.821 "data_offset": 0, 00:31:53.821 "data_size": 63488 00:31:53.821 }, 00:31:53.821 { 00:31:53.821 "name": "BaseBdev2", 00:31:53.821 "uuid": "0ede4a4f-7dcb-5e67-b18f-cd34dccd5e26", 00:31:53.821 "is_configured": true, 00:31:53.821 "data_offset": 2048, 00:31:53.821 "data_size": 63488 00:31:53.821 } 00:31:53.821 ] 00:31:53.821 }' 00:31:53.821 18:30:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:31:53.821 18:30:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:54.389 18:30:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:31:54.389 18:30:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:31:54.389 18:30:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:31:54.389 18:30:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:31:54.389 18:30:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:31:54.389 18:30:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:54.389 18:30:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:54.389 18:30:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:54.389 18:30:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:54.389 18:30:25 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:54.389 18:30:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:31:54.389 "name": "raid_bdev1", 00:31:54.389 "uuid": "36d38983-a573-415f-9aaf-cd3cf56a5370", 00:31:54.389 "strip_size_kb": 0, 00:31:54.389 "state": "online", 00:31:54.389 "raid_level": "raid1", 00:31:54.389 "superblock": true, 00:31:54.389 "num_base_bdevs": 2, 00:31:54.389 "num_base_bdevs_discovered": 1, 00:31:54.389 "num_base_bdevs_operational": 1, 00:31:54.389 "base_bdevs_list": [ 00:31:54.389 { 00:31:54.389 "name": null, 00:31:54.389 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:54.389 "is_configured": false, 00:31:54.389 "data_offset": 0, 00:31:54.389 "data_size": 63488 00:31:54.389 }, 00:31:54.389 { 00:31:54.389 "name": "BaseBdev2", 00:31:54.389 "uuid": "0ede4a4f-7dcb-5e67-b18f-cd34dccd5e26", 00:31:54.389 "is_configured": true, 00:31:54.389 "data_offset": 2048, 00:31:54.389 "data_size": 63488 00:31:54.389 } 00:31:54.389 ] 00:31:54.389 }' 00:31:54.389 18:30:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:31:54.389 18:30:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:31:54.389 18:30:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:31:54.389 18:30:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:31:54.389 18:30:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 75456 00:31:54.389 18:30:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@954 -- # '[' -z 75456 ']' 00:31:54.389 18:30:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@958 -- # kill -0 75456 00:31:54.389 18:30:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@959 -- # uname 00:31:54.389 18:30:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@959 -- # '[' 
Linux = Linux ']' 00:31:54.389 18:30:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75456 00:31:54.389 18:30:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:31:54.389 killing process with pid 75456 00:31:54.389 Received shutdown signal, test time was about 60.000000 seconds 00:31:54.389 00:31:54.389 Latency(us) 00:31:54.389 [2024-12-06T18:30:25.338Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:54.389 [2024-12-06T18:30:25.338Z] =================================================================================================================== 00:31:54.389 [2024-12-06T18:30:25.338Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:31:54.389 18:30:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:31:54.389 18:30:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75456' 00:31:54.390 18:30:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@973 -- # kill 75456 00:31:54.390 [2024-12-06 18:30:25.196890] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:31:54.390 18:30:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@978 -- # wait 75456 00:31:54.390 [2024-12-06 18:30:25.197020] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:31:54.390 [2024-12-06 18:30:25.197071] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:31:54.390 [2024-12-06 18:30:25.197086] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:31:54.648 [2024-12-06 18:30:25.501442] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:31:56.072 18:30:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:31:56.072 00:31:56.072 real 0m24.663s 
00:31:56.072 user 0m28.814s 00:31:56.072 sys 0m4.918s 00:31:56.072 18:30:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:56.072 ************************************ 00:31:56.072 END TEST raid_rebuild_test_sb 00:31:56.072 ************************************ 00:31:56.072 18:30:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:31:56.072 18:30:26 bdev_raid -- bdev/bdev_raid.sh@980 -- # run_test raid_rebuild_test_io raid_rebuild_test raid1 2 false true true 00:31:56.072 18:30:26 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:31:56.072 18:30:26 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:56.072 18:30:26 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:31:56.072 ************************************ 00:31:56.072 START TEST raid_rebuild_test_io 00:31:56.072 ************************************ 00:31:56.072 18:30:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 false true true 00:31:56.072 18:30:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:31:56.072 18:30:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:31:56.072 18:30:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:31:56.072 18:30:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:31:56.072 18:30:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:31:56.072 18:30:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:31:56.072 18:30:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:31:56.072 18:30:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:31:56.072 18:30:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:31:56.072 
18:30:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:31:56.072 18:30:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:31:56.072 18:30:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:31:56.072 18:30:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:31:56.072 18:30:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:31:56.072 18:30:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:31:56.072 18:30:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:31:56.072 18:30:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:31:56.072 18:30:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:31:56.072 18:30:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:31:56.072 18:30:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:31:56.072 18:30:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:31:56.072 18:30:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:31:56.072 18:30:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:31:56.072 18:30:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@597 -- # raid_pid=76197 00:31:56.072 18:30:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 76197 00:31:56.072 18:30:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:31:56.072 18:30:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@835 -- # '[' -z 76197 ']' 00:31:56.072 18:30:26 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:56.072 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:56.072 18:30:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:56.072 18:30:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:56.072 18:30:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:56.072 18:30:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:31:56.072 I/O size of 3145728 is greater than zero copy threshold (65536). 00:31:56.072 Zero copy mechanism will not be used. 00:31:56.072 [2024-12-06 18:30:26.821422] Starting SPDK v25.01-pre git sha1 b6a18b192 / DPDK 24.03.0 initialization... 00:31:56.072 [2024-12-06 18:30:26.821552] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76197 ] 00:31:56.072 [2024-12-06 18:30:27.000344] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:56.330 [2024-12-06 18:30:27.122659] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:56.588 [2024-12-06 18:30:27.323773] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:31:56.588 [2024-12-06 18:30:27.323818] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:31:56.846 18:30:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:56.846 18:30:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # return 0 00:31:56.846 18:30:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev 
in "${base_bdevs[@]}" 00:31:56.846 18:30:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:31:56.846 18:30:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:56.846 18:30:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:31:56.846 BaseBdev1_malloc 00:31:56.846 18:30:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:56.846 18:30:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:31:56.846 18:30:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:56.846 18:30:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:31:56.846 [2024-12-06 18:30:27.719707] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:31:56.846 [2024-12-06 18:30:27.719781] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:31:56.846 [2024-12-06 18:30:27.719805] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:31:56.846 [2024-12-06 18:30:27.719822] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:31:56.846 [2024-12-06 18:30:27.722217] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:31:56.846 [2024-12-06 18:30:27.722266] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:31:56.846 BaseBdev1 00:31:56.846 18:30:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:56.846 18:30:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:31:56.846 18:30:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:31:56.846 18:30:27 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:56.846 18:30:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:31:56.846 BaseBdev2_malloc 00:31:56.846 18:30:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:56.846 18:30:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:31:56.846 18:30:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:56.846 18:30:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:31:56.846 [2024-12-06 18:30:27.776659] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:31:56.846 [2024-12-06 18:30:27.776732] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:31:56.846 [2024-12-06 18:30:27.776759] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:31:56.846 [2024-12-06 18:30:27.776773] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:31:56.846 [2024-12-06 18:30:27.779101] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:31:56.846 [2024-12-06 18:30:27.779160] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:31:56.846 BaseBdev2 00:31:56.846 18:30:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:56.846 18:30:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:31:56.846 18:30:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:56.846 18:30:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:31:57.132 spare_malloc 00:31:57.132 18:30:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:31:57.132 18:30:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:31:57.132 18:30:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:57.132 18:30:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:31:57.132 spare_delay 00:31:57.132 18:30:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:57.132 18:30:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:31:57.132 18:30:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:57.132 18:30:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:31:57.132 [2024-12-06 18:30:27.855705] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:31:57.132 [2024-12-06 18:30:27.855779] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:31:57.132 [2024-12-06 18:30:27.855801] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:31:57.132 [2024-12-06 18:30:27.855815] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:31:57.132 [2024-12-06 18:30:27.858180] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:31:57.132 [2024-12-06 18:30:27.858226] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:31:57.132 spare 00:31:57.132 18:30:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:57.132 18:30:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:31:57.132 18:30:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:57.132 18:30:27 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:31:57.132 [2024-12-06 18:30:27.867740] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:31:57.132 [2024-12-06 18:30:27.869774] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:31:57.132 [2024-12-06 18:30:27.869865] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:31:57.132 [2024-12-06 18:30:27.869882] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:31:57.132 [2024-12-06 18:30:27.870166] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:31:57.132 [2024-12-06 18:30:27.870330] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:31:57.132 [2024-12-06 18:30:27.870343] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:31:57.132 [2024-12-06 18:30:27.870500] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:31:57.132 18:30:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:57.132 18:30:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:31:57.132 18:30:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:31:57.132 18:30:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:31:57.132 18:30:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:31:57.132 18:30:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:31:57.132 18:30:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:31:57.132 18:30:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 
00:31:57.132 18:30:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:31:57.132 18:30:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:31:57.132 18:30:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:31:57.132 18:30:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:57.132 18:30:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:57.132 18:30:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:57.132 18:30:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:31:57.132 18:30:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:57.132 18:30:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:31:57.132 "name": "raid_bdev1", 00:31:57.132 "uuid": "f8a471c0-d0a1-478b-9baf-9e8657a1c665", 00:31:57.132 "strip_size_kb": 0, 00:31:57.132 "state": "online", 00:31:57.133 "raid_level": "raid1", 00:31:57.133 "superblock": false, 00:31:57.133 "num_base_bdevs": 2, 00:31:57.133 "num_base_bdevs_discovered": 2, 00:31:57.133 "num_base_bdevs_operational": 2, 00:31:57.133 "base_bdevs_list": [ 00:31:57.133 { 00:31:57.133 "name": "BaseBdev1", 00:31:57.133 "uuid": "c8ff9ca9-9fa6-5132-b2f5-216d504f0410", 00:31:57.133 "is_configured": true, 00:31:57.133 "data_offset": 0, 00:31:57.133 "data_size": 65536 00:31:57.133 }, 00:31:57.133 { 00:31:57.133 "name": "BaseBdev2", 00:31:57.133 "uuid": "9da85de3-48cd-55ec-84c9-2b4642ceb0e8", 00:31:57.133 "is_configured": true, 00:31:57.133 "data_offset": 0, 00:31:57.133 "data_size": 65536 00:31:57.133 } 00:31:57.133 ] 00:31:57.133 }' 00:31:57.133 18:30:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:31:57.133 18:30:27 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@10 -- # set +x 00:31:57.391 18:30:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:31:57.391 18:30:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:57.391 18:30:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:31:57.391 18:30:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:31:57.391 [2024-12-06 18:30:28.327583] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:31:57.650 18:30:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:57.650 18:30:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:31:57.650 18:30:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:57.650 18:30:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:57.650 18:30:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:31:57.650 18:30:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:31:57.650 18:30:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:57.650 18:30:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:31:57.650 18:30:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:31:57.650 18:30:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:31:57.650 18:30:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:31:57.650 18:30:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:57.650 18:30:28 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@10 -- # set +x 00:31:57.650 [2024-12-06 18:30:28.419303] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:31:57.650 18:30:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:57.650 18:30:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:31:57.650 18:30:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:31:57.650 18:30:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:31:57.650 18:30:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:31:57.650 18:30:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:31:57.650 18:30:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:31:57.650 18:30:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:31:57.650 18:30:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:31:57.650 18:30:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:31:57.650 18:30:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:31:57.650 18:30:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:57.650 18:30:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:57.650 18:30:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:57.650 18:30:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:31:57.650 18:30:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:57.650 18:30:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:31:57.650 "name": "raid_bdev1", 00:31:57.650 "uuid": "f8a471c0-d0a1-478b-9baf-9e8657a1c665", 00:31:57.650 "strip_size_kb": 0, 00:31:57.650 "state": "online", 00:31:57.650 "raid_level": "raid1", 00:31:57.650 "superblock": false, 00:31:57.650 "num_base_bdevs": 2, 00:31:57.650 "num_base_bdevs_discovered": 1, 00:31:57.650 "num_base_bdevs_operational": 1, 00:31:57.650 "base_bdevs_list": [ 00:31:57.650 { 00:31:57.650 "name": null, 00:31:57.650 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:57.650 "is_configured": false, 00:31:57.650 "data_offset": 0, 00:31:57.650 "data_size": 65536 00:31:57.650 }, 00:31:57.650 { 00:31:57.650 "name": "BaseBdev2", 00:31:57.650 "uuid": "9da85de3-48cd-55ec-84c9-2b4642ceb0e8", 00:31:57.650 "is_configured": true, 00:31:57.650 "data_offset": 0, 00:31:57.650 "data_size": 65536 00:31:57.650 } 00:31:57.650 ] 00:31:57.650 }' 00:31:57.650 18:30:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:31:57.650 18:30:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:31:57.650 I/O size of 3145728 is greater than zero copy threshold (65536). 00:31:57.650 Zero copy mechanism will not be used. 00:31:57.650 Running I/O for 60 seconds... 
00:31:57.650 [2024-12-06 18:30:28.499905] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:31:57.909 18:30:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:31:57.909 18:30:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:57.909 18:30:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:31:57.909 [2024-12-06 18:30:28.843903] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:31:58.167 18:30:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:58.167 18:30:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:31:58.167 [2024-12-06 18:30:28.901377] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:31:58.167 [2024-12-06 18:30:28.903502] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:31:58.167 [2024-12-06 18:30:29.011343] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:31:58.167 [2024-12-06 18:30:29.011879] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:31:58.425 [2024-12-06 18:30:29.125828] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:31:58.425 [2024-12-06 18:30:29.126163] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:31:58.684 [2024-12-06 18:30:29.444552] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:31:58.684 [2024-12-06 18:30:29.445078] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 
00:31:58.684 154.00 IOPS, 462.00 MiB/s [2024-12-06T18:30:29.633Z] [2024-12-06 18:30:29.570467] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:31:58.951 [2024-12-06 18:30:29.878644] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:31:58.952 18:30:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:31:58.952 18:30:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:31:58.952 18:30:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:31:58.952 18:30:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:31:58.952 18:30:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:31:59.211 18:30:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:59.211 18:30:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:59.211 18:30:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:59.211 18:30:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:31:59.211 18:30:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:59.211 18:30:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:31:59.211 "name": "raid_bdev1", 00:31:59.211 "uuid": "f8a471c0-d0a1-478b-9baf-9e8657a1c665", 00:31:59.211 "strip_size_kb": 0, 00:31:59.211 "state": "online", 00:31:59.211 "raid_level": "raid1", 00:31:59.211 "superblock": false, 00:31:59.211 "num_base_bdevs": 2, 00:31:59.211 "num_base_bdevs_discovered": 2, 00:31:59.211 "num_base_bdevs_operational": 2, 00:31:59.211 "process": { 00:31:59.211 "type": 
"rebuild", 00:31:59.211 "target": "spare", 00:31:59.211 "progress": { 00:31:59.211 "blocks": 14336, 00:31:59.211 "percent": 21 00:31:59.211 } 00:31:59.211 }, 00:31:59.211 "base_bdevs_list": [ 00:31:59.211 { 00:31:59.211 "name": "spare", 00:31:59.211 "uuid": "ce53e7ae-4dea-5eb1-b40a-43e587b5c5c6", 00:31:59.211 "is_configured": true, 00:31:59.211 "data_offset": 0, 00:31:59.211 "data_size": 65536 00:31:59.211 }, 00:31:59.211 { 00:31:59.211 "name": "BaseBdev2", 00:31:59.211 "uuid": "9da85de3-48cd-55ec-84c9-2b4642ceb0e8", 00:31:59.211 "is_configured": true, 00:31:59.211 "data_offset": 0, 00:31:59.211 "data_size": 65536 00:31:59.211 } 00:31:59.211 ] 00:31:59.211 }' 00:31:59.211 18:30:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:31:59.211 18:30:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:31:59.211 18:30:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:31:59.211 18:30:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:31:59.211 18:30:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:31:59.211 18:30:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:59.211 18:30:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:31:59.211 [2024-12-06 18:30:30.043457] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:31:59.211 [2024-12-06 18:30:30.087359] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:31:59.211 [2024-12-06 18:30:30.087697] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:31:59.211 [2024-12-06 18:30:30.094688] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild 
on raid bdev raid_bdev1: No such device 00:31:59.211 [2024-12-06 18:30:30.102461] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:31:59.211 [2024-12-06 18:30:30.102506] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:31:59.211 [2024-12-06 18:30:30.102523] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:31:59.211 [2024-12-06 18:30:30.145528] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006080 00:31:59.469 18:30:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:59.469 18:30:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:31:59.469 18:30:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:31:59.469 18:30:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:31:59.469 18:30:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:31:59.469 18:30:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:31:59.469 18:30:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:31:59.469 18:30:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:31:59.470 18:30:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:31:59.470 18:30:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:31:59.470 18:30:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:31:59.470 18:30:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:59.470 18:30:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 
00:31:59.470 18:30:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:59.470 18:30:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:31:59.470 18:30:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:59.470 18:30:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:31:59.470 "name": "raid_bdev1", 00:31:59.470 "uuid": "f8a471c0-d0a1-478b-9baf-9e8657a1c665", 00:31:59.470 "strip_size_kb": 0, 00:31:59.470 "state": "online", 00:31:59.470 "raid_level": "raid1", 00:31:59.470 "superblock": false, 00:31:59.470 "num_base_bdevs": 2, 00:31:59.470 "num_base_bdevs_discovered": 1, 00:31:59.470 "num_base_bdevs_operational": 1, 00:31:59.470 "base_bdevs_list": [ 00:31:59.470 { 00:31:59.470 "name": null, 00:31:59.470 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:59.470 "is_configured": false, 00:31:59.470 "data_offset": 0, 00:31:59.470 "data_size": 65536 00:31:59.470 }, 00:31:59.470 { 00:31:59.470 "name": "BaseBdev2", 00:31:59.470 "uuid": "9da85de3-48cd-55ec-84c9-2b4642ceb0e8", 00:31:59.470 "is_configured": true, 00:31:59.470 "data_offset": 0, 00:31:59.470 "data_size": 65536 00:31:59.470 } 00:31:59.470 ] 00:31:59.470 }' 00:31:59.470 18:30:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:31:59.470 18:30:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:31:59.728 167.00 IOPS, 501.00 MiB/s [2024-12-06T18:30:30.677Z] 18:30:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:31:59.728 18:30:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:31:59.728 18:30:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:31:59.728 18:30:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:31:59.728 18:30:30 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:31:59.728 18:30:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:31:59.728 18:30:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:31:59.728 18:30:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:59.728 18:30:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:31:59.987 18:30:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:59.987 18:30:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:31:59.987 "name": "raid_bdev1", 00:31:59.987 "uuid": "f8a471c0-d0a1-478b-9baf-9e8657a1c665", 00:31:59.987 "strip_size_kb": 0, 00:31:59.987 "state": "online", 00:31:59.987 "raid_level": "raid1", 00:31:59.987 "superblock": false, 00:31:59.987 "num_base_bdevs": 2, 00:31:59.987 "num_base_bdevs_discovered": 1, 00:31:59.987 "num_base_bdevs_operational": 1, 00:31:59.987 "base_bdevs_list": [ 00:31:59.987 { 00:31:59.987 "name": null, 00:31:59.987 "uuid": "00000000-0000-0000-0000-000000000000", 00:31:59.987 "is_configured": false, 00:31:59.987 "data_offset": 0, 00:31:59.987 "data_size": 65536 00:31:59.987 }, 00:31:59.987 { 00:31:59.987 "name": "BaseBdev2", 00:31:59.987 "uuid": "9da85de3-48cd-55ec-84c9-2b4642ceb0e8", 00:31:59.987 "is_configured": true, 00:31:59.987 "data_offset": 0, 00:31:59.987 "data_size": 65536 00:31:59.987 } 00:31:59.987 ] 00:31:59.987 }' 00:31:59.987 18:30:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:31:59.987 18:30:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:31:59.987 18:30:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:31:59.987 18:30:30 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:31:59.987 18:30:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:31:59.987 18:30:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:59.987 18:30:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:31:59.987 [2024-12-06 18:30:30.780890] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:31:59.987 18:30:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:59.987 18:30:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:31:59.987 [2024-12-06 18:30:30.838174] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:31:59.987 [2024-12-06 18:30:30.840341] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:32:00.246 [2024-12-06 18:30:30.952588] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:32:00.246 [2024-12-06 18:30:30.953163] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:32:00.246 [2024-12-06 18:30:31.168163] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:32:00.246 [2024-12-06 18:30:31.168497] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:32:00.814 162.67 IOPS, 488.00 MiB/s [2024-12-06T18:30:31.764Z] [2024-12-06 18:30:31.543126] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:32:00.815 [2024-12-06 18:30:31.543472] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:32:01.074 18:30:31 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:32:01.074 18:30:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:32:01.074 18:30:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:32:01.074 18:30:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:32:01.074 18:30:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:32:01.074 18:30:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:01.074 18:30:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:01.074 18:30:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:01.074 18:30:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:32:01.074 18:30:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:01.074 18:30:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:32:01.074 "name": "raid_bdev1", 00:32:01.074 "uuid": "f8a471c0-d0a1-478b-9baf-9e8657a1c665", 00:32:01.074 "strip_size_kb": 0, 00:32:01.074 "state": "online", 00:32:01.074 "raid_level": "raid1", 00:32:01.074 "superblock": false, 00:32:01.074 "num_base_bdevs": 2, 00:32:01.074 "num_base_bdevs_discovered": 2, 00:32:01.074 "num_base_bdevs_operational": 2, 00:32:01.074 "process": { 00:32:01.074 "type": "rebuild", 00:32:01.074 "target": "spare", 00:32:01.074 "progress": { 00:32:01.074 "blocks": 12288, 00:32:01.074 "percent": 18 00:32:01.074 } 00:32:01.074 }, 00:32:01.074 "base_bdevs_list": [ 00:32:01.074 { 00:32:01.074 "name": "spare", 00:32:01.074 "uuid": "ce53e7ae-4dea-5eb1-b40a-43e587b5c5c6", 00:32:01.074 "is_configured": true, 00:32:01.074 "data_offset": 0, 00:32:01.074 "data_size": 65536 
00:32:01.074 }, 00:32:01.074 { 00:32:01.074 "name": "BaseBdev2", 00:32:01.074 "uuid": "9da85de3-48cd-55ec-84c9-2b4642ceb0e8", 00:32:01.074 "is_configured": true, 00:32:01.074 "data_offset": 0, 00:32:01.074 "data_size": 65536 00:32:01.074 } 00:32:01.074 ] 00:32:01.074 }' 00:32:01.074 [2024-12-06 18:30:31.868233] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:32:01.074 18:30:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:32:01.074 18:30:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:32:01.074 18:30:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:32:01.075 18:30:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:32:01.075 18:30:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:32:01.075 18:30:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:32:01.075 18:30:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:32:01.075 18:30:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:32:01.075 18:30:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # local timeout=408 00:32:01.075 18:30:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:32:01.075 18:30:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:32:01.075 18:30:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:32:01.075 18:30:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:32:01.075 18:30:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:32:01.075 
18:30:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:32:01.075 18:30:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:01.075 18:30:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:01.075 18:30:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:32:01.075 18:30:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:01.075 18:30:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:01.075 18:30:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:32:01.075 "name": "raid_bdev1", 00:32:01.075 "uuid": "f8a471c0-d0a1-478b-9baf-9e8657a1c665", 00:32:01.075 "strip_size_kb": 0, 00:32:01.075 "state": "online", 00:32:01.075 "raid_level": "raid1", 00:32:01.075 "superblock": false, 00:32:01.075 "num_base_bdevs": 2, 00:32:01.075 "num_base_bdevs_discovered": 2, 00:32:01.075 "num_base_bdevs_operational": 2, 00:32:01.075 "process": { 00:32:01.075 "type": "rebuild", 00:32:01.075 "target": "spare", 00:32:01.075 "progress": { 00:32:01.075 "blocks": 14336, 00:32:01.075 "percent": 21 00:32:01.075 } 00:32:01.075 }, 00:32:01.075 "base_bdevs_list": [ 00:32:01.075 { 00:32:01.075 "name": "spare", 00:32:01.075 "uuid": "ce53e7ae-4dea-5eb1-b40a-43e587b5c5c6", 00:32:01.075 "is_configured": true, 00:32:01.075 "data_offset": 0, 00:32:01.075 "data_size": 65536 00:32:01.075 }, 00:32:01.075 { 00:32:01.075 "name": "BaseBdev2", 00:32:01.075 "uuid": "9da85de3-48cd-55ec-84c9-2b4642ceb0e8", 00:32:01.075 "is_configured": true, 00:32:01.075 "data_offset": 0, 00:32:01.075 "data_size": 65536 00:32:01.075 } 00:32:01.075 ] 00:32:01.075 }' 00:32:01.075 18:30:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:32:01.335 18:30:32 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:32:01.335 18:30:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:32:01.335 18:30:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:32:01.335 18:30:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:32:01.335 [2024-12-06 18:30:32.109646] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:32:02.163 138.50 IOPS, 415.50 MiB/s [2024-12-06T18:30:33.112Z] [2024-12-06 18:30:32.912902] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:32:02.163 18:30:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:32:02.163 18:30:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:32:02.163 18:30:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:32:02.163 18:30:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:32:02.163 18:30:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:32:02.163 18:30:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:32:02.163 18:30:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:02.163 18:30:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:02.163 18:30:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:02.163 18:30:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:32:02.422 18:30:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:02.422 18:30:33 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:32:02.422 "name": "raid_bdev1", 00:32:02.422 "uuid": "f8a471c0-d0a1-478b-9baf-9e8657a1c665", 00:32:02.422 "strip_size_kb": 0, 00:32:02.422 "state": "online", 00:32:02.422 "raid_level": "raid1", 00:32:02.422 "superblock": false, 00:32:02.422 "num_base_bdevs": 2, 00:32:02.422 "num_base_bdevs_discovered": 2, 00:32:02.422 "num_base_bdevs_operational": 2, 00:32:02.422 "process": { 00:32:02.422 "type": "rebuild", 00:32:02.422 "target": "spare", 00:32:02.422 "progress": { 00:32:02.422 "blocks": 28672, 00:32:02.422 "percent": 43 00:32:02.422 } 00:32:02.422 }, 00:32:02.422 "base_bdevs_list": [ 00:32:02.422 { 00:32:02.422 "name": "spare", 00:32:02.422 "uuid": "ce53e7ae-4dea-5eb1-b40a-43e587b5c5c6", 00:32:02.422 "is_configured": true, 00:32:02.422 "data_offset": 0, 00:32:02.422 "data_size": 65536 00:32:02.422 }, 00:32:02.422 { 00:32:02.422 "name": "BaseBdev2", 00:32:02.422 "uuid": "9da85de3-48cd-55ec-84c9-2b4642ceb0e8", 00:32:02.422 "is_configured": true, 00:32:02.422 "data_offset": 0, 00:32:02.422 "data_size": 65536 00:32:02.422 } 00:32:02.422 ] 00:32:02.422 }' 00:32:02.422 18:30:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:32:02.422 18:30:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:32:02.422 18:30:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:32:02.422 [2024-12-06 18:30:33.225402] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:32:02.422 18:30:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:32:02.422 18:30:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:32:02.682 [2024-12-06 18:30:33.433677] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 
offset_begin: 30720 offset_end: 36864 00:32:02.682 [2024-12-06 18:30:33.434007] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:32:02.942 119.20 IOPS, 357.60 MiB/s [2024-12-06T18:30:33.891Z] [2024-12-06 18:30:33.792954] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:32:02.942 [2024-12-06 18:30:33.793525] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:32:03.531 18:30:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:32:03.531 18:30:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:32:03.531 18:30:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:32:03.531 18:30:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:32:03.531 18:30:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:32:03.531 18:30:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:32:03.531 18:30:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:03.531 18:30:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:03.531 18:30:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:03.531 18:30:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:32:03.531 18:30:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:03.531 18:30:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:32:03.531 "name": "raid_bdev1", 00:32:03.531 "uuid": "f8a471c0-d0a1-478b-9baf-9e8657a1c665", 
00:32:03.531 "strip_size_kb": 0, 00:32:03.531 "state": "online", 00:32:03.531 "raid_level": "raid1", 00:32:03.531 "superblock": false, 00:32:03.531 "num_base_bdevs": 2, 00:32:03.531 "num_base_bdevs_discovered": 2, 00:32:03.531 "num_base_bdevs_operational": 2, 00:32:03.531 "process": { 00:32:03.531 "type": "rebuild", 00:32:03.531 "target": "spare", 00:32:03.531 "progress": { 00:32:03.531 "blocks": 43008, 00:32:03.531 "percent": 65 00:32:03.531 } 00:32:03.531 }, 00:32:03.531 "base_bdevs_list": [ 00:32:03.531 { 00:32:03.531 "name": "spare", 00:32:03.531 "uuid": "ce53e7ae-4dea-5eb1-b40a-43e587b5c5c6", 00:32:03.531 "is_configured": true, 00:32:03.531 "data_offset": 0, 00:32:03.531 "data_size": 65536 00:32:03.531 }, 00:32:03.531 { 00:32:03.531 "name": "BaseBdev2", 00:32:03.531 "uuid": "9da85de3-48cd-55ec-84c9-2b4642ceb0e8", 00:32:03.531 "is_configured": true, 00:32:03.531 "data_offset": 0, 00:32:03.531 "data_size": 65536 00:32:03.531 } 00:32:03.531 ] 00:32:03.531 }' 00:32:03.531 18:30:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:32:03.531 18:30:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:32:03.531 18:30:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:32:03.531 18:30:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:32:03.531 18:30:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:32:03.531 [2024-12-06 18:30:34.365370] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152 00:32:03.811 106.67 IOPS, 320.00 MiB/s [2024-12-06T18:30:34.760Z] [2024-12-06 18:30:34.682937] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 51200 offset_begin: 49152 offset_end: 55296 00:32:04.070 [2024-12-06 18:30:35.012952] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: 
split: process_offset: 57344 offset_begin: 55296 offset_end: 61440 00:32:04.329 [2024-12-06 18:30:35.119845] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 59392 offset_begin: 55296 offset_end: 61440 00:32:04.589 18:30:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:32:04.589 18:30:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:32:04.589 18:30:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:32:04.589 18:30:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:32:04.589 18:30:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:32:04.589 18:30:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:32:04.589 18:30:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:04.589 18:30:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:04.589 18:30:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:04.589 18:30:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:32:04.589 18:30:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:04.589 18:30:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:32:04.589 "name": "raid_bdev1", 00:32:04.589 "uuid": "f8a471c0-d0a1-478b-9baf-9e8657a1c665", 00:32:04.589 "strip_size_kb": 0, 00:32:04.589 "state": "online", 00:32:04.589 "raid_level": "raid1", 00:32:04.589 "superblock": false, 00:32:04.589 "num_base_bdevs": 2, 00:32:04.589 "num_base_bdevs_discovered": 2, 00:32:04.589 "num_base_bdevs_operational": 2, 00:32:04.589 "process": { 00:32:04.589 "type": "rebuild", 00:32:04.589 "target": "spare", 
00:32:04.589 "progress": { 00:32:04.589 "blocks": 63488, 00:32:04.589 "percent": 96 00:32:04.589 } 00:32:04.589 }, 00:32:04.589 "base_bdevs_list": [ 00:32:04.589 { 00:32:04.589 "name": "spare", 00:32:04.589 "uuid": "ce53e7ae-4dea-5eb1-b40a-43e587b5c5c6", 00:32:04.589 "is_configured": true, 00:32:04.589 "data_offset": 0, 00:32:04.589 "data_size": 65536 00:32:04.589 }, 00:32:04.589 { 00:32:04.589 "name": "BaseBdev2", 00:32:04.589 "uuid": "9da85de3-48cd-55ec-84c9-2b4642ceb0e8", 00:32:04.589 "is_configured": true, 00:32:04.589 "data_offset": 0, 00:32:04.589 "data_size": 65536 00:32:04.589 } 00:32:04.589 ] 00:32:04.589 }' 00:32:04.589 18:30:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:32:04.589 [2024-12-06 18:30:35.456782] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:32:04.589 18:30:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:32:04.589 18:30:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:32:04.589 95.43 IOPS, 286.29 MiB/s [2024-12-06T18:30:35.538Z] 18:30:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:32:04.589 18:30:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:32:04.848 [2024-12-06 18:30:35.562298] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:32:04.848 [2024-12-06 18:30:35.565130] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:32:05.786 87.50 IOPS, 262.50 MiB/s [2024-12-06T18:30:36.735Z] 18:30:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:32:05.786 18:30:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:32:05.786 18:30:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local 
raid_bdev_name=raid_bdev1 00:32:05.786 18:30:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:32:05.786 18:30:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:32:05.786 18:30:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:32:05.786 18:30:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:05.786 18:30:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:05.786 18:30:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:05.786 18:30:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:32:05.786 18:30:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:05.786 18:30:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:32:05.786 "name": "raid_bdev1", 00:32:05.786 "uuid": "f8a471c0-d0a1-478b-9baf-9e8657a1c665", 00:32:05.786 "strip_size_kb": 0, 00:32:05.786 "state": "online", 00:32:05.786 "raid_level": "raid1", 00:32:05.786 "superblock": false, 00:32:05.786 "num_base_bdevs": 2, 00:32:05.786 "num_base_bdevs_discovered": 2, 00:32:05.786 "num_base_bdevs_operational": 2, 00:32:05.786 "base_bdevs_list": [ 00:32:05.786 { 00:32:05.786 "name": "spare", 00:32:05.786 "uuid": "ce53e7ae-4dea-5eb1-b40a-43e587b5c5c6", 00:32:05.786 "is_configured": true, 00:32:05.786 "data_offset": 0, 00:32:05.786 "data_size": 65536 00:32:05.786 }, 00:32:05.787 { 00:32:05.787 "name": "BaseBdev2", 00:32:05.787 "uuid": "9da85de3-48cd-55ec-84c9-2b4642ceb0e8", 00:32:05.787 "is_configured": true, 00:32:05.787 "data_offset": 0, 00:32:05.787 "data_size": 65536 00:32:05.787 } 00:32:05.787 ] 00:32:05.787 }' 00:32:05.787 18:30:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:32:05.787 18:30:36 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:32:05.787 18:30:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:32:05.787 18:30:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:32:05.787 18:30:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@709 -- # break 00:32:05.787 18:30:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:32:05.787 18:30:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:32:05.787 18:30:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:32:05.787 18:30:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:32:05.787 18:30:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:32:05.787 18:30:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:05.787 18:30:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:05.787 18:30:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:32:05.787 18:30:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:05.787 18:30:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:05.787 18:30:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:32:05.787 "name": "raid_bdev1", 00:32:05.787 "uuid": "f8a471c0-d0a1-478b-9baf-9e8657a1c665", 00:32:05.787 "strip_size_kb": 0, 00:32:05.787 "state": "online", 00:32:05.787 "raid_level": "raid1", 00:32:05.787 "superblock": false, 00:32:05.787 "num_base_bdevs": 2, 00:32:05.787 "num_base_bdevs_discovered": 2, 00:32:05.787 "num_base_bdevs_operational": 2, 00:32:05.787 "base_bdevs_list": [ 
00:32:05.787 { 00:32:05.787 "name": "spare", 00:32:05.787 "uuid": "ce53e7ae-4dea-5eb1-b40a-43e587b5c5c6", 00:32:05.787 "is_configured": true, 00:32:05.787 "data_offset": 0, 00:32:05.787 "data_size": 65536 00:32:05.787 }, 00:32:05.787 { 00:32:05.787 "name": "BaseBdev2", 00:32:05.787 "uuid": "9da85de3-48cd-55ec-84c9-2b4642ceb0e8", 00:32:05.787 "is_configured": true, 00:32:05.787 "data_offset": 0, 00:32:05.787 "data_size": 65536 00:32:05.787 } 00:32:05.787 ] 00:32:05.787 }' 00:32:05.787 18:30:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:32:06.047 18:30:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:32:06.047 18:30:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:32:06.047 18:30:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:32:06.047 18:30:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:32:06.047 18:30:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:32:06.047 18:30:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:32:06.047 18:30:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:32:06.047 18:30:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:32:06.047 18:30:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:32:06.047 18:30:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:32:06.047 18:30:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:32:06.047 18:30:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:32:06.047 18:30:36 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:32:06.047 18:30:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:06.047 18:30:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:06.047 18:30:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:32:06.047 18:30:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:06.047 18:30:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:06.047 18:30:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:32:06.047 "name": "raid_bdev1", 00:32:06.047 "uuid": "f8a471c0-d0a1-478b-9baf-9e8657a1c665", 00:32:06.047 "strip_size_kb": 0, 00:32:06.047 "state": "online", 00:32:06.047 "raid_level": "raid1", 00:32:06.047 "superblock": false, 00:32:06.047 "num_base_bdevs": 2, 00:32:06.047 "num_base_bdevs_discovered": 2, 00:32:06.047 "num_base_bdevs_operational": 2, 00:32:06.047 "base_bdevs_list": [ 00:32:06.047 { 00:32:06.047 "name": "spare", 00:32:06.047 "uuid": "ce53e7ae-4dea-5eb1-b40a-43e587b5c5c6", 00:32:06.047 "is_configured": true, 00:32:06.047 "data_offset": 0, 00:32:06.047 "data_size": 65536 00:32:06.047 }, 00:32:06.047 { 00:32:06.047 "name": "BaseBdev2", 00:32:06.047 "uuid": "9da85de3-48cd-55ec-84c9-2b4642ceb0e8", 00:32:06.047 "is_configured": true, 00:32:06.047 "data_offset": 0, 00:32:06.047 "data_size": 65536 00:32:06.047 } 00:32:06.047 ] 00:32:06.047 }' 00:32:06.047 18:30:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:32:06.047 18:30:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:32:06.307 18:30:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:32:06.307 18:30:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:06.307 18:30:37 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:32:06.307 [2024-12-06 18:30:37.226511] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:32:06.307 [2024-12-06 18:30:37.226690] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:32:06.568 00:32:06.568 Latency(us) 00:32:06.568 [2024-12-06T18:30:37.517Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:06.568 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:32:06.568 raid_bdev1 : 8.80 84.11 252.32 0.00 0.00 16117.74 302.68 114543.24 00:32:06.568 [2024-12-06T18:30:37.517Z] =================================================================================================================== 00:32:06.568 [2024-12-06T18:30:37.517Z] Total : 84.11 252.32 0.00 0.00 16117.74 302.68 114543.24 00:32:06.568 [2024-12-06 18:30:37.308801] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:32:06.568 [2024-12-06 18:30:37.309030] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:32:06.568 [2024-12-06 18:30:37.309169] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:32:06.568 [2024-12-06 18:30:37.309287] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:32:06.568 { 00:32:06.568 "results": [ 00:32:06.568 { 00:32:06.568 "job": "raid_bdev1", 00:32:06.568 "core_mask": "0x1", 00:32:06.568 "workload": "randrw", 00:32:06.568 "percentage": 50, 00:32:06.568 "status": "finished", 00:32:06.568 "queue_depth": 2, 00:32:06.568 "io_size": 3145728, 00:32:06.568 "runtime": 8.798446, 00:32:06.568 "iops": 84.10576140377516, 00:32:06.568 "mibps": 252.3172842113255, 00:32:06.568 "io_failed": 0, 00:32:06.568 "io_timeout": 0, 00:32:06.568 "avg_latency_us": 16117.741104960382, 00:32:06.568 "min_latency_us": 
302.67630522088353, 00:32:06.568 "max_latency_us": 114543.24176706828 00:32:06.568 } 00:32:06.568 ], 00:32:06.568 "core_count": 1 00:32:06.568 } 00:32:06.568 18:30:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:06.568 18:30:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:06.568 18:30:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # jq length 00:32:06.568 18:30:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:06.568 18:30:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:32:06.568 18:30:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:06.568 18:30:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:32:06.568 18:30:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:32:06.568 18:30:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:32:06.568 18:30:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:32:06.568 18:30:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:32:06.568 18:30:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:32:06.568 18:30:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:32:06.568 18:30:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:32:06.568 18:30:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:32:06.568 18:30:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:32:06.568 18:30:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:32:06.568 18:30:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 
1 )) 00:32:06.568 18:30:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:32:06.827 /dev/nbd0 00:32:06.827 18:30:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:32:06.827 18:30:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:32:06.827 18:30:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:32:06.827 18:30:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # local i 00:32:06.827 18:30:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:32:06.827 18:30:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:32:06.827 18:30:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:32:06.827 18:30:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@877 -- # break 00:32:06.827 18:30:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:32:06.827 18:30:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:32:06.827 18:30:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:32:06.827 1+0 records in 00:32:06.827 1+0 records out 00:32:06.827 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000398239 s, 10.3 MB/s 00:32:06.827 18:30:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:32:06.827 18:30:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # size=4096 00:32:06.827 18:30:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:32:06.827 18:30:37 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:32:06.827 18:30:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@893 -- # return 0 00:32:06.827 18:30:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:32:06.827 18:30:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:32:06.827 18:30:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:32:06.827 18:30:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev2 ']' 00:32:06.827 18:30:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev2 /dev/nbd1 00:32:06.827 18:30:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:32:06.827 18:30:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev2') 00:32:06.827 18:30:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:32:06.827 18:30:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:32:06.827 18:30:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:32:06.827 18:30:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:32:06.827 18:30:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:32:06.827 18:30:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:32:06.827 18:30:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev2 /dev/nbd1 00:32:07.087 /dev/nbd1 00:32:07.087 18:30:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:32:07.087 18:30:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:32:07.087 18:30:37 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:32:07.087 18:30:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # local i 00:32:07.087 18:30:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:32:07.087 18:30:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:32:07.087 18:30:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:32:07.087 18:30:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@877 -- # break 00:32:07.087 18:30:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:32:07.087 18:30:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:32:07.087 18:30:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:32:07.087 1+0 records in 00:32:07.087 1+0 records out 00:32:07.087 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000365417 s, 11.2 MB/s 00:32:07.087 18:30:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:32:07.087 18:30:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # size=4096 00:32:07.087 18:30:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:32:07.087 18:30:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:32:07.087 18:30:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@893 -- # return 0 00:32:07.087 18:30:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:32:07.087 18:30:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:32:07.087 18:30:37 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@731 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:32:07.345 18:30:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:32:07.345 18:30:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:32:07.345 18:30:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:32:07.345 18:30:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:32:07.345 18:30:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:32:07.345 18:30:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:32:07.345 18:30:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:32:07.604 18:30:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:32:07.604 18:30:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:32:07.604 18:30:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:32:07.604 18:30:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:32:07.604 18:30:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:32:07.604 18:30:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:32:07.604 18:30:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:32:07.604 18:30:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:32:07.604 18:30:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:32:07.604 18:30:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:32:07.604 18:30:38 bdev_raid.raid_rebuild_test_io -- 
bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:32:07.604 18:30:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:32:07.604 18:30:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:32:07.604 18:30:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:32:07.604 18:30:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:32:07.863 18:30:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:32:07.863 18:30:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:32:07.863 18:30:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:32:07.863 18:30:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:32:07.863 18:30:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:32:07.863 18:30:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:32:07.863 18:30:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:32:07.863 18:30:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:32:07.863 18:30:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:32:07.863 18:30:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@784 -- # killprocess 76197 00:32:07.863 18:30:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@954 -- # '[' -z 76197 ']' 00:32:07.863 18:30:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@958 -- # kill -0 76197 00:32:07.863 18:30:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@959 -- # uname 00:32:07.863 18:30:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:07.863 18:30:38 bdev_raid.raid_rebuild_test_io 
-- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76197 00:32:07.863 18:30:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:32:07.863 18:30:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:32:07.863 killing process with pid 76197 00:32:07.863 18:30:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76197' 00:32:07.863 18:30:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@973 -- # kill 76197 00:32:07.863 Received shutdown signal, test time was about 10.148888 seconds 00:32:07.863 00:32:07.863 Latency(us) 00:32:07.863 [2024-12-06T18:30:38.812Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:07.863 [2024-12-06T18:30:38.812Z] =================================================================================================================== 00:32:07.863 [2024-12-06T18:30:38.812Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:32:07.863 [2024-12-06 18:30:38.634619] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:32:07.863 18:30:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@978 -- # wait 76197 00:32:08.122 [2024-12-06 18:30:38.866170] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:32:09.501 18:30:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@786 -- # return 0 00:32:09.501 00:32:09.501 real 0m13.339s 00:32:09.501 user 0m16.488s 00:32:09.501 sys 0m1.771s 00:32:09.501 18:30:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:09.501 18:30:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:32:09.501 ************************************ 00:32:09.501 END TEST raid_rebuild_test_io 00:32:09.501 ************************************ 00:32:09.501 18:30:40 bdev_raid -- bdev/bdev_raid.sh@981 -- # run_test raid_rebuild_test_sb_io raid_rebuild_test 
raid1 2 true true true 00:32:09.501 18:30:40 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:32:09.501 18:30:40 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:09.501 18:30:40 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:32:09.501 ************************************ 00:32:09.501 START TEST raid_rebuild_test_sb_io 00:32:09.501 ************************************ 00:32:09.501 18:30:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 true true true 00:32:09.501 18:30:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:32:09.501 18:30:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:32:09.501 18:30:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:32:09.501 18:30:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:32:09.501 18:30:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:32:09.501 18:30:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:32:09.501 18:30:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:32:09.501 18:30:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:32:09.501 18:30:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:32:09.501 18:30:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:32:09.501 18:30:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:32:09.501 18:30:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:32:09.501 18:30:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:32:09.501 18:30:40 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:32:09.501 18:30:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:32:09.501 18:30:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:32:09.501 18:30:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:32:09.501 18:30:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:32:09.501 18:30:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:32:09.501 18:30:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:32:09.501 18:30:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:32:09.501 18:30:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:32:09.501 18:30:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:32:09.501 18:30:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:32:09.501 18:30:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@597 -- # raid_pid=76592 00:32:09.501 18:30:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 76592 00:32:09.501 18:30:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:32:09.501 18:30:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@835 -- # '[' -z 76592 ']' 00:32:09.502 18:30:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:09.502 18:30:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:09.502 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:32:09.502 18:30:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:09.502 18:30:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:09.502 18:30:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:32:09.502 I/O size of 3145728 is greater than zero copy threshold (65536). 00:32:09.502 Zero copy mechanism will not be used. 00:32:09.502 [2024-12-06 18:30:40.249626] Starting SPDK v25.01-pre git sha1 b6a18b192 / DPDK 24.03.0 initialization... 00:32:09.502 [2024-12-06 18:30:40.249788] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76592 ] 00:32:09.502 [2024-12-06 18:30:40.422282] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:09.760 [2024-12-06 18:30:40.539776] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:10.018 [2024-12-06 18:30:40.727473] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:32:10.018 [2024-12-06 18:30:40.727526] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:32:10.276 18:30:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:10.276 18:30:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@868 -- # return 0 00:32:10.276 18:30:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:32:10.276 18:30:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:32:10.276 18:30:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:10.276 18:30:41 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:32:10.276 BaseBdev1_malloc 00:32:10.276 18:30:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:10.276 18:30:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:32:10.276 18:30:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:10.276 18:30:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:32:10.276 [2024-12-06 18:30:41.145861] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:32:10.276 [2024-12-06 18:30:41.145931] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:32:10.276 [2024-12-06 18:30:41.145953] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:32:10.276 [2024-12-06 18:30:41.145968] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:32:10.276 [2024-12-06 18:30:41.148327] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:32:10.276 [2024-12-06 18:30:41.148375] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:32:10.276 BaseBdev1 00:32:10.276 18:30:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:10.277 18:30:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:32:10.277 18:30:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:32:10.277 18:30:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:10.277 18:30:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:32:10.277 BaseBdev2_malloc 00:32:10.277 18:30:41 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:10.277 18:30:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:32:10.277 18:30:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:10.277 18:30:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:32:10.277 [2024-12-06 18:30:41.202631] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:32:10.277 [2024-12-06 18:30:41.202699] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:32:10.277 [2024-12-06 18:30:41.202724] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:32:10.277 [2024-12-06 18:30:41.202740] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:32:10.277 [2024-12-06 18:30:41.205046] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:32:10.277 [2024-12-06 18:30:41.205093] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:32:10.277 BaseBdev2 00:32:10.277 18:30:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:10.277 18:30:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:32:10.277 18:30:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:10.277 18:30:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:32:10.535 spare_malloc 00:32:10.535 18:30:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:10.535 18:30:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:32:10.535 18:30:41 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:32:10.535 18:30:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:32:10.535 spare_delay 00:32:10.535 18:30:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:10.535 18:30:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:32:10.535 18:30:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:10.535 18:30:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:32:10.535 [2024-12-06 18:30:41.278100] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:32:10.536 [2024-12-06 18:30:41.278177] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:32:10.536 [2024-12-06 18:30:41.278199] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:32:10.536 [2024-12-06 18:30:41.278220] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:32:10.536 [2024-12-06 18:30:41.280588] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:32:10.536 [2024-12-06 18:30:41.280635] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:32:10.536 spare 00:32:10.536 18:30:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:10.536 18:30:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:32:10.536 18:30:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:10.536 18:30:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:32:10.536 [2024-12-06 18:30:41.290167] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 
00:32:10.536 [2024-12-06 18:30:41.292184] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:32:10.536 [2024-12-06 18:30:41.292356] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:32:10.536 [2024-12-06 18:30:41.292374] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:32:10.536 [2024-12-06 18:30:41.292618] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:32:10.536 [2024-12-06 18:30:41.292773] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:32:10.536 [2024-12-06 18:30:41.292784] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:32:10.536 [2024-12-06 18:30:41.292934] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:32:10.536 18:30:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:10.536 18:30:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:32:10.536 18:30:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:32:10.536 18:30:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:32:10.536 18:30:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:32:10.536 18:30:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:32:10.536 18:30:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:32:10.536 18:30:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:32:10.536 18:30:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:32:10.536 18:30:41 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:32:10.536 18:30:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:32:10.536 18:30:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:10.536 18:30:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:10.536 18:30:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:10.536 18:30:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:32:10.536 18:30:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:10.536 18:30:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:32:10.536 "name": "raid_bdev1", 00:32:10.536 "uuid": "4f2abe3f-a4fe-4c4a-8610-e20229d55ba8", 00:32:10.536 "strip_size_kb": 0, 00:32:10.536 "state": "online", 00:32:10.536 "raid_level": "raid1", 00:32:10.536 "superblock": true, 00:32:10.536 "num_base_bdevs": 2, 00:32:10.536 "num_base_bdevs_discovered": 2, 00:32:10.536 "num_base_bdevs_operational": 2, 00:32:10.536 "base_bdevs_list": [ 00:32:10.536 { 00:32:10.536 "name": "BaseBdev1", 00:32:10.536 "uuid": "bba55ef0-1421-5438-bb7d-d97b1e2ce7ff", 00:32:10.536 "is_configured": true, 00:32:10.536 "data_offset": 2048, 00:32:10.536 "data_size": 63488 00:32:10.536 }, 00:32:10.536 { 00:32:10.536 "name": "BaseBdev2", 00:32:10.536 "uuid": "03f98217-103d-59c0-ba24-ef0e4ce47ddc", 00:32:10.536 "is_configured": true, 00:32:10.536 "data_offset": 2048, 00:32:10.536 "data_size": 63488 00:32:10.536 } 00:32:10.536 ] 00:32:10.536 }' 00:32:10.536 18:30:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:32:10.536 18:30:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:32:10.795 18:30:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd 
bdev_get_bdevs -b raid_bdev1 00:32:10.795 18:30:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:32:10.795 18:30:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:10.795 18:30:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:32:10.795 [2024-12-06 18:30:41.709818] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:32:10.795 18:30:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:11.055 18:30:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:32:11.055 18:30:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:11.055 18:30:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:32:11.055 18:30:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:11.055 18:30:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:32:11.055 18:30:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:11.055 18:30:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:32:11.055 18:30:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:32:11.055 18:30:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:32:11.055 18:30:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:32:11.055 18:30:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:11.055 18:30:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:32:11.055 [2024-12-06 18:30:41.801403] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:32:11.055 18:30:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:11.055 18:30:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:32:11.055 18:30:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:32:11.055 18:30:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:32:11.055 18:30:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:32:11.055 18:30:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:32:11.055 18:30:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:32:11.055 18:30:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:32:11.055 18:30:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:32:11.055 18:30:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:32:11.055 18:30:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:32:11.055 18:30:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:11.055 18:30:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:11.055 18:30:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:11.055 18:30:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:32:11.055 18:30:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:11.055 18:30:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:32:11.055 "name": 
"raid_bdev1", 00:32:11.055 "uuid": "4f2abe3f-a4fe-4c4a-8610-e20229d55ba8", 00:32:11.055 "strip_size_kb": 0, 00:32:11.055 "state": "online", 00:32:11.055 "raid_level": "raid1", 00:32:11.055 "superblock": true, 00:32:11.055 "num_base_bdevs": 2, 00:32:11.055 "num_base_bdevs_discovered": 1, 00:32:11.055 "num_base_bdevs_operational": 1, 00:32:11.055 "base_bdevs_list": [ 00:32:11.055 { 00:32:11.055 "name": null, 00:32:11.055 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:11.055 "is_configured": false, 00:32:11.055 "data_offset": 0, 00:32:11.055 "data_size": 63488 00:32:11.055 }, 00:32:11.055 { 00:32:11.055 "name": "BaseBdev2", 00:32:11.055 "uuid": "03f98217-103d-59c0-ba24-ef0e4ce47ddc", 00:32:11.055 "is_configured": true, 00:32:11.055 "data_offset": 2048, 00:32:11.055 "data_size": 63488 00:32:11.055 } 00:32:11.055 ] 00:32:11.055 }' 00:32:11.055 18:30:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:32:11.055 18:30:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:32:11.055 [2024-12-06 18:30:41.904622] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:32:11.055 I/O size of 3145728 is greater than zero copy threshold (65536). 00:32:11.055 Zero copy mechanism will not be used. 00:32:11.055 Running I/O for 60 seconds... 
00:32:11.314 18:30:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:32:11.314 18:30:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:11.314 18:30:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:32:11.314 [2024-12-06 18:30:42.253991] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:32:11.574 18:30:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:11.574 18:30:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:32:11.574 [2024-12-06 18:30:42.319940] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:32:11.574 [2024-12-06 18:30:42.322074] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:32:11.574 [2024-12-06 18:30:42.435682] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:32:11.574 [2024-12-06 18:30:42.436265] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:32:11.834 [2024-12-06 18:30:42.644174] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:32:11.834 [2024-12-06 18:30:42.644509] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:32:12.092 178.00 IOPS, 534.00 MiB/s [2024-12-06T18:30:43.041Z] [2024-12-06 18:30:42.991520] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:32:12.362 [2024-12-06 18:30:43.128226] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:32:12.362 [2024-12-06 18:30:43.128575] bdev_raid.c: 
859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:32:12.362 18:30:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:32:12.362 18:30:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:32:12.362 18:30:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:32:12.362 18:30:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:32:12.362 18:30:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:32:12.632 18:30:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:12.632 18:30:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:12.632 18:30:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:12.632 18:30:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:32:12.632 18:30:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:12.632 18:30:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:32:12.632 "name": "raid_bdev1", 00:32:12.632 "uuid": "4f2abe3f-a4fe-4c4a-8610-e20229d55ba8", 00:32:12.632 "strip_size_kb": 0, 00:32:12.632 "state": "online", 00:32:12.632 "raid_level": "raid1", 00:32:12.632 "superblock": true, 00:32:12.632 "num_base_bdevs": 2, 00:32:12.632 "num_base_bdevs_discovered": 2, 00:32:12.632 "num_base_bdevs_operational": 2, 00:32:12.632 "process": { 00:32:12.632 "type": "rebuild", 00:32:12.632 "target": "spare", 00:32:12.632 "progress": { 00:32:12.632 "blocks": 10240, 00:32:12.632 "percent": 16 00:32:12.632 } 00:32:12.632 }, 00:32:12.632 "base_bdevs_list": [ 00:32:12.632 { 00:32:12.632 "name": "spare", 
00:32:12.632 "uuid": "6ac256e6-dd7f-53cc-aac8-5bc784815a63", 00:32:12.632 "is_configured": true, 00:32:12.632 "data_offset": 2048, 00:32:12.632 "data_size": 63488 00:32:12.632 }, 00:32:12.632 { 00:32:12.632 "name": "BaseBdev2", 00:32:12.632 "uuid": "03f98217-103d-59c0-ba24-ef0e4ce47ddc", 00:32:12.632 "is_configured": true, 00:32:12.632 "data_offset": 2048, 00:32:12.632 "data_size": 63488 00:32:12.632 } 00:32:12.632 ] 00:32:12.632 }' 00:32:12.632 18:30:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:32:12.632 18:30:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:32:12.632 18:30:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:32:12.632 18:30:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:32:12.632 18:30:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:32:12.632 18:30:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:12.632 18:30:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:32:12.632 [2024-12-06 18:30:43.458080] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:32:12.632 [2024-12-06 18:30:43.482833] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:32:12.892 [2024-12-06 18:30:43.590976] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:32:12.892 [2024-12-06 18:30:43.599604] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:32:12.892 [2024-12-06 18:30:43.599660] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:32:12.892 [2024-12-06 18:30:43.599678] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to 
remove target bdev: No such device 00:32:12.892 [2024-12-06 18:30:43.648652] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006080 00:32:12.892 18:30:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:12.892 18:30:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:32:12.892 18:30:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:32:12.892 18:30:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:32:12.892 18:30:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:32:12.892 18:30:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:32:12.892 18:30:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:32:12.892 18:30:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:32:12.892 18:30:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:32:12.892 18:30:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:32:12.892 18:30:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:32:12.892 18:30:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:12.892 18:30:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:12.892 18:30:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:12.892 18:30:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:32:12.892 18:30:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:12.892 18:30:43 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:32:12.892 "name": "raid_bdev1", 00:32:12.892 "uuid": "4f2abe3f-a4fe-4c4a-8610-e20229d55ba8", 00:32:12.892 "strip_size_kb": 0, 00:32:12.892 "state": "online", 00:32:12.892 "raid_level": "raid1", 00:32:12.892 "superblock": true, 00:32:12.892 "num_base_bdevs": 2, 00:32:12.892 "num_base_bdevs_discovered": 1, 00:32:12.892 "num_base_bdevs_operational": 1, 00:32:12.892 "base_bdevs_list": [ 00:32:12.892 { 00:32:12.892 "name": null, 00:32:12.892 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:12.892 "is_configured": false, 00:32:12.892 "data_offset": 0, 00:32:12.892 "data_size": 63488 00:32:12.892 }, 00:32:12.892 { 00:32:12.892 "name": "BaseBdev2", 00:32:12.892 "uuid": "03f98217-103d-59c0-ba24-ef0e4ce47ddc", 00:32:12.892 "is_configured": true, 00:32:12.892 "data_offset": 2048, 00:32:12.892 "data_size": 63488 00:32:12.892 } 00:32:12.892 ] 00:32:12.892 }' 00:32:12.892 18:30:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:32:12.892 18:30:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:32:13.151 151.50 IOPS, 454.50 MiB/s [2024-12-06T18:30:44.100Z] 18:30:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:32:13.151 18:30:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:32:13.151 18:30:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:32:13.151 18:30:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:32:13.151 18:30:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:32:13.151 18:30:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:13.151 18:30:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 
00:32:13.151 18:30:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:32:13.151 18:30:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:13.151 18:30:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:13.151 18:30:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:32:13.151 "name": "raid_bdev1", 00:32:13.151 "uuid": "4f2abe3f-a4fe-4c4a-8610-e20229d55ba8", 00:32:13.151 "strip_size_kb": 0, 00:32:13.151 "state": "online", 00:32:13.151 "raid_level": "raid1", 00:32:13.151 "superblock": true, 00:32:13.151 "num_base_bdevs": 2, 00:32:13.151 "num_base_bdevs_discovered": 1, 00:32:13.151 "num_base_bdevs_operational": 1, 00:32:13.151 "base_bdevs_list": [ 00:32:13.151 { 00:32:13.151 "name": null, 00:32:13.151 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:13.151 "is_configured": false, 00:32:13.151 "data_offset": 0, 00:32:13.151 "data_size": 63488 00:32:13.151 }, 00:32:13.151 { 00:32:13.151 "name": "BaseBdev2", 00:32:13.151 "uuid": "03f98217-103d-59c0-ba24-ef0e4ce47ddc", 00:32:13.151 "is_configured": true, 00:32:13.151 "data_offset": 2048, 00:32:13.151 "data_size": 63488 00:32:13.151 } 00:32:13.151 ] 00:32:13.151 }' 00:32:13.151 18:30:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:32:13.410 18:30:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:32:13.410 18:30:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:32:13.410 18:30:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:32:13.410 18:30:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:32:13.410 18:30:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:32:13.410 18:30:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:32:13.410 [2024-12-06 18:30:44.190034] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:32:13.410 18:30:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:13.410 18:30:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:32:13.410 [2024-12-06 18:30:44.253073] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:32:13.410 [2024-12-06 18:30:44.255253] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:32:13.668 [2024-12-06 18:30:44.362251] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:32:13.668 [2024-12-06 18:30:44.362774] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:32:13.668 [2024-12-06 18:30:44.493788] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:32:13.668 [2024-12-06 18:30:44.494089] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:32:13.927 [2024-12-06 18:30:44.840803] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:32:14.186 159.33 IOPS, 478.00 MiB/s [2024-12-06T18:30:45.135Z] [2024-12-06 18:30:44.960985] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:32:14.186 [2024-12-06 18:30:44.961354] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:32:14.444 18:30:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 
rebuild spare 00:32:14.444 18:30:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:32:14.444 18:30:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:32:14.444 18:30:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:32:14.444 18:30:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:32:14.444 18:30:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:14.444 18:30:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:14.444 18:30:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:14.444 18:30:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:32:14.444 18:30:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:14.444 18:30:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:32:14.444 "name": "raid_bdev1", 00:32:14.444 "uuid": "4f2abe3f-a4fe-4c4a-8610-e20229d55ba8", 00:32:14.444 "strip_size_kb": 0, 00:32:14.444 "state": "online", 00:32:14.444 "raid_level": "raid1", 00:32:14.444 "superblock": true, 00:32:14.444 "num_base_bdevs": 2, 00:32:14.444 "num_base_bdevs_discovered": 2, 00:32:14.444 "num_base_bdevs_operational": 2, 00:32:14.444 "process": { 00:32:14.444 "type": "rebuild", 00:32:14.444 "target": "spare", 00:32:14.444 "progress": { 00:32:14.444 "blocks": 12288, 00:32:14.444 "percent": 19 00:32:14.444 } 00:32:14.444 }, 00:32:14.444 "base_bdevs_list": [ 00:32:14.444 { 00:32:14.444 "name": "spare", 00:32:14.444 "uuid": "6ac256e6-dd7f-53cc-aac8-5bc784815a63", 00:32:14.444 "is_configured": true, 00:32:14.444 "data_offset": 2048, 00:32:14.444 "data_size": 63488 00:32:14.444 }, 00:32:14.444 { 00:32:14.444 "name": "BaseBdev2", 
00:32:14.444 "uuid": "03f98217-103d-59c0-ba24-ef0e4ce47ddc", 00:32:14.444 "is_configured": true, 00:32:14.444 "data_offset": 2048, 00:32:14.444 "data_size": 63488 00:32:14.444 } 00:32:14.444 ] 00:32:14.444 }' 00:32:14.444 18:30:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:32:14.444 18:30:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:32:14.444 18:30:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:32:14.444 18:30:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:32:14.444 18:30:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:32:14.444 18:30:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:32:14.444 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:32:14.444 18:30:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:32:14.444 18:30:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:32:14.444 18:30:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:32:14.444 18:30:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # local timeout=422 00:32:14.444 18:30:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:32:14.445 18:30:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:32:14.445 18:30:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:32:14.445 18:30:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:32:14.445 18:30:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local 
target=spare 00:32:14.445 18:30:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:32:14.445 18:30:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:14.445 18:30:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:14.445 18:30:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:14.445 18:30:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:32:14.703 18:30:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:14.703 [2024-12-06 18:30:45.410004] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:32:14.703 18:30:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:32:14.703 "name": "raid_bdev1", 00:32:14.703 "uuid": "4f2abe3f-a4fe-4c4a-8610-e20229d55ba8", 00:32:14.703 "strip_size_kb": 0, 00:32:14.703 "state": "online", 00:32:14.703 "raid_level": "raid1", 00:32:14.703 "superblock": true, 00:32:14.703 "num_base_bdevs": 2, 00:32:14.703 "num_base_bdevs_discovered": 2, 00:32:14.703 "num_base_bdevs_operational": 2, 00:32:14.703 "process": { 00:32:14.703 "type": "rebuild", 00:32:14.703 "target": "spare", 00:32:14.703 "progress": { 00:32:14.703 "blocks": 14336, 00:32:14.703 "percent": 22 00:32:14.703 } 00:32:14.703 }, 00:32:14.703 "base_bdevs_list": [ 00:32:14.703 { 00:32:14.703 "name": "spare", 00:32:14.703 "uuid": "6ac256e6-dd7f-53cc-aac8-5bc784815a63", 00:32:14.703 "is_configured": true, 00:32:14.703 "data_offset": 2048, 00:32:14.703 "data_size": 63488 00:32:14.703 }, 00:32:14.703 { 00:32:14.703 "name": "BaseBdev2", 00:32:14.703 "uuid": "03f98217-103d-59c0-ba24-ef0e4ce47ddc", 00:32:14.703 "is_configured": true, 00:32:14.703 "data_offset": 2048, 00:32:14.703 "data_size": 63488 00:32:14.703 } 
00:32:14.703 ] 00:32:14.703 }' 00:32:14.703 18:30:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:32:14.703 18:30:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:32:14.703 18:30:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:32:14.703 18:30:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:32:14.703 18:30:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:32:14.961 [2024-12-06 18:30:45.759705] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:32:15.218 139.25 IOPS, 417.75 MiB/s [2024-12-06T18:30:46.167Z] [2024-12-06 18:30:45.977285] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:32:15.476 [2024-12-06 18:30:46.317540] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:32:15.735 18:30:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:32:15.735 18:30:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:32:15.735 18:30:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:32:15.735 18:30:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:32:15.735 18:30:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:32:15.735 18:30:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:32:15.735 18:30:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:15.735 18:30:46 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:15.735 18:30:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:15.735 18:30:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:32:15.735 18:30:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:15.735 18:30:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:32:15.735 "name": "raid_bdev1", 00:32:15.735 "uuid": "4f2abe3f-a4fe-4c4a-8610-e20229d55ba8", 00:32:15.735 "strip_size_kb": 0, 00:32:15.735 "state": "online", 00:32:15.735 "raid_level": "raid1", 00:32:15.735 "superblock": true, 00:32:15.735 "num_base_bdevs": 2, 00:32:15.735 "num_base_bdevs_discovered": 2, 00:32:15.735 "num_base_bdevs_operational": 2, 00:32:15.735 "process": { 00:32:15.735 "type": "rebuild", 00:32:15.735 "target": "spare", 00:32:15.735 "progress": { 00:32:15.735 "blocks": 28672, 00:32:15.735 "percent": 45 00:32:15.735 } 00:32:15.735 }, 00:32:15.735 "base_bdevs_list": [ 00:32:15.735 { 00:32:15.735 "name": "spare", 00:32:15.735 "uuid": "6ac256e6-dd7f-53cc-aac8-5bc784815a63", 00:32:15.735 "is_configured": true, 00:32:15.735 "data_offset": 2048, 00:32:15.735 "data_size": 63488 00:32:15.735 }, 00:32:15.735 { 00:32:15.735 "name": "BaseBdev2", 00:32:15.735 "uuid": "03f98217-103d-59c0-ba24-ef0e4ce47ddc", 00:32:15.735 "is_configured": true, 00:32:15.735 "data_offset": 2048, 00:32:15.735 "data_size": 63488 00:32:15.735 } 00:32:15.735 ] 00:32:15.735 }' 00:32:15.735 18:30:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:32:15.735 18:30:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:32:15.735 18:30:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:32:15.735 18:30:46 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:32:15.735 18:30:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:32:16.362 120.40 IOPS, 361.20 MiB/s [2024-12-06T18:30:47.311Z] [2024-12-06 18:30:46.983616] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:32:16.620 [2024-12-06 18:30:47.313992] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 45056 offset_begin: 43008 offset_end: 49152 00:32:16.620 [2024-12-06 18:30:47.314635] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 45056 offset_begin: 43008 offset_end: 49152 00:32:16.620 [2024-12-06 18:30:47.430886] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152 00:32:16.878 18:30:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:32:16.878 18:30:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:32:16.878 18:30:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:32:16.878 18:30:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:32:16.878 18:30:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:32:16.878 18:30:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:32:16.878 18:30:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:16.878 18:30:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:16.878 18:30:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:16.878 18:30:47 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@10 -- # set +x 00:32:16.878 18:30:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:16.878 18:30:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:32:16.878 "name": "raid_bdev1", 00:32:16.878 "uuid": "4f2abe3f-a4fe-4c4a-8610-e20229d55ba8", 00:32:16.878 "strip_size_kb": 0, 00:32:16.878 "state": "online", 00:32:16.878 "raid_level": "raid1", 00:32:16.878 "superblock": true, 00:32:16.878 "num_base_bdevs": 2, 00:32:16.878 "num_base_bdevs_discovered": 2, 00:32:16.878 "num_base_bdevs_operational": 2, 00:32:16.878 "process": { 00:32:16.878 "type": "rebuild", 00:32:16.878 "target": "spare", 00:32:16.878 "progress": { 00:32:16.878 "blocks": 51200, 00:32:16.878 "percent": 80 00:32:16.878 } 00:32:16.878 }, 00:32:16.878 "base_bdevs_list": [ 00:32:16.878 { 00:32:16.878 "name": "spare", 00:32:16.878 "uuid": "6ac256e6-dd7f-53cc-aac8-5bc784815a63", 00:32:16.878 "is_configured": true, 00:32:16.878 "data_offset": 2048, 00:32:16.878 "data_size": 63488 00:32:16.878 }, 00:32:16.878 { 00:32:16.878 "name": "BaseBdev2", 00:32:16.878 "uuid": "03f98217-103d-59c0-ba24-ef0e4ce47ddc", 00:32:16.878 "is_configured": true, 00:32:16.878 "data_offset": 2048, 00:32:16.878 "data_size": 63488 00:32:16.878 } 00:32:16.878 ] 00:32:16.878 }' 00:32:16.878 18:30:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:32:16.878 18:30:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:32:16.878 18:30:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:32:16.878 [2024-12-06 18:30:47.769305] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 53248 offset_begin: 49152 offset_end: 55296 00:32:16.878 18:30:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:32:16.878 18:30:47 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:32:17.703 107.17 IOPS, 321.50 MiB/s [2024-12-06T18:30:48.652Z] [2024-12-06 18:30:48.422897] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:32:17.703 [2024-12-06 18:30:48.529559] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:32:17.703 [2024-12-06 18:30:48.531838] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:32:17.961 18:30:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:32:17.961 18:30:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:32:17.961 18:30:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:32:17.961 18:30:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:32:17.962 18:30:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:32:17.962 18:30:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:32:17.962 18:30:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:17.962 18:30:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:17.962 18:30:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:17.962 18:30:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:32:17.962 18:30:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:17.962 18:30:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:32:17.962 "name": "raid_bdev1", 00:32:17.962 "uuid": "4f2abe3f-a4fe-4c4a-8610-e20229d55ba8", 00:32:17.962 "strip_size_kb": 0, 
00:32:17.962 "state": "online", 00:32:17.962 "raid_level": "raid1", 00:32:17.962 "superblock": true, 00:32:17.962 "num_base_bdevs": 2, 00:32:17.962 "num_base_bdevs_discovered": 2, 00:32:17.962 "num_base_bdevs_operational": 2, 00:32:17.962 "base_bdevs_list": [ 00:32:17.962 { 00:32:17.962 "name": "spare", 00:32:17.962 "uuid": "6ac256e6-dd7f-53cc-aac8-5bc784815a63", 00:32:17.962 "is_configured": true, 00:32:17.962 "data_offset": 2048, 00:32:17.962 "data_size": 63488 00:32:17.962 }, 00:32:17.962 { 00:32:17.962 "name": "BaseBdev2", 00:32:17.962 "uuid": "03f98217-103d-59c0-ba24-ef0e4ce47ddc", 00:32:17.962 "is_configured": true, 00:32:17.962 "data_offset": 2048, 00:32:17.962 "data_size": 63488 00:32:17.962 } 00:32:17.962 ] 00:32:17.962 }' 00:32:17.962 18:30:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:32:18.220 96.29 IOPS, 288.86 MiB/s [2024-12-06T18:30:49.169Z] 18:30:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:32:18.220 18:30:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:32:18.220 18:30:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:32:18.220 18:30:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@709 -- # break 00:32:18.220 18:30:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:32:18.220 18:30:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:32:18.220 18:30:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:32:18.220 18:30:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:32:18.220 18:30:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:32:18.220 18:30:48 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:18.220 18:30:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:18.220 18:30:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:32:18.220 18:30:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:18.220 18:30:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:18.220 18:30:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:32:18.220 "name": "raid_bdev1", 00:32:18.220 "uuid": "4f2abe3f-a4fe-4c4a-8610-e20229d55ba8", 00:32:18.220 "strip_size_kb": 0, 00:32:18.220 "state": "online", 00:32:18.220 "raid_level": "raid1", 00:32:18.220 "superblock": true, 00:32:18.220 "num_base_bdevs": 2, 00:32:18.220 "num_base_bdevs_discovered": 2, 00:32:18.220 "num_base_bdevs_operational": 2, 00:32:18.220 "base_bdevs_list": [ 00:32:18.220 { 00:32:18.220 "name": "spare", 00:32:18.220 "uuid": "6ac256e6-dd7f-53cc-aac8-5bc784815a63", 00:32:18.220 "is_configured": true, 00:32:18.220 "data_offset": 2048, 00:32:18.220 "data_size": 63488 00:32:18.220 }, 00:32:18.220 { 00:32:18.220 "name": "BaseBdev2", 00:32:18.220 "uuid": "03f98217-103d-59c0-ba24-ef0e4ce47ddc", 00:32:18.220 "is_configured": true, 00:32:18.220 "data_offset": 2048, 00:32:18.220 "data_size": 63488 00:32:18.220 } 00:32:18.220 ] 00:32:18.220 }' 00:32:18.220 18:30:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:32:18.220 18:30:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:32:18.220 18:30:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:32:18.220 18:30:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:32:18.220 18:30:49 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:32:18.220 18:30:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:32:18.220 18:30:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:32:18.220 18:30:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:32:18.220 18:30:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:32:18.220 18:30:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:32:18.220 18:30:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:32:18.220 18:30:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:32:18.220 18:30:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:32:18.220 18:30:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:32:18.221 18:30:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:18.221 18:30:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:18.221 18:30:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:32:18.221 18:30:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:18.221 18:30:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:18.221 18:30:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:32:18.221 "name": "raid_bdev1", 00:32:18.221 "uuid": "4f2abe3f-a4fe-4c4a-8610-e20229d55ba8", 00:32:18.221 "strip_size_kb": 0, 00:32:18.221 "state": "online", 00:32:18.221 "raid_level": "raid1", 00:32:18.221 "superblock": true, 00:32:18.221 
"num_base_bdevs": 2, 00:32:18.221 "num_base_bdevs_discovered": 2, 00:32:18.221 "num_base_bdevs_operational": 2, 00:32:18.221 "base_bdevs_list": [ 00:32:18.221 { 00:32:18.221 "name": "spare", 00:32:18.221 "uuid": "6ac256e6-dd7f-53cc-aac8-5bc784815a63", 00:32:18.221 "is_configured": true, 00:32:18.221 "data_offset": 2048, 00:32:18.221 "data_size": 63488 00:32:18.221 }, 00:32:18.221 { 00:32:18.221 "name": "BaseBdev2", 00:32:18.221 "uuid": "03f98217-103d-59c0-ba24-ef0e4ce47ddc", 00:32:18.221 "is_configured": true, 00:32:18.221 "data_offset": 2048, 00:32:18.221 "data_size": 63488 00:32:18.221 } 00:32:18.221 ] 00:32:18.221 }' 00:32:18.221 18:30:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:32:18.221 18:30:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:32:18.850 18:30:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:32:18.850 18:30:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:18.850 18:30:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:32:18.850 [2024-12-06 18:30:49.522255] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:32:18.850 [2024-12-06 18:30:49.522306] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:32:18.850 00:32:18.850 Latency(us) 00:32:18.850 [2024-12-06T18:30:49.799Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:18.850 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:32:18.850 raid_bdev1 : 7.72 91.46 274.37 0.00 0.00 15188.70 315.84 110332.09 00:32:18.850 [2024-12-06T18:30:49.799Z] =================================================================================================================== 00:32:18.850 [2024-12-06T18:30:49.799Z] Total : 91.46 274.37 0.00 0.00 15188.70 315.84 
110332.09 00:32:18.850 [2024-12-06 18:30:49.636998] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:32:18.850 [2024-12-06 18:30:49.637076] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:32:18.850 [2024-12-06 18:30:49.637182] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:32:18.850 [2024-12-06 18:30:49.637196] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:32:18.850 { 00:32:18.850 "results": [ 00:32:18.850 { 00:32:18.850 "job": "raid_bdev1", 00:32:18.850 "core_mask": "0x1", 00:32:18.850 "workload": "randrw", 00:32:18.850 "percentage": 50, 00:32:18.850 "status": "finished", 00:32:18.850 "queue_depth": 2, 00:32:18.850 "io_size": 3145728, 00:32:18.850 "runtime": 7.719638, 00:32:18.850 "iops": 91.45506563908826, 00:32:18.850 "mibps": 274.36519691726477, 00:32:18.850 "io_failed": 0, 00:32:18.850 "io_timeout": 0, 00:32:18.850 "avg_latency_us": 15188.698408364335, 00:32:18.850 "min_latency_us": 315.8361445783133, 00:32:18.850 "max_latency_us": 110332.09317269076 00:32:18.850 } 00:32:18.850 ], 00:32:18.850 "core_count": 1 00:32:18.850 } 00:32:18.850 18:30:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:18.850 18:30:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:18.850 18:30:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:18.850 18:30:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # jq length 00:32:18.850 18:30:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:32:18.850 18:30:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:18.850 18:30:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:32:18.850 18:30:49 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:32:18.850 18:30:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:32:18.850 18:30:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:32:18.851 18:30:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:32:18.851 18:30:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:32:18.851 18:30:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:32:18.851 18:30:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:32:18.851 18:30:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:32:18.851 18:30:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:32:18.851 18:30:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:32:18.851 18:30:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:32:18.851 18:30:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:32:19.109 /dev/nbd0 00:32:19.109 18:30:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:32:19.109 18:30:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:32:19.109 18:30:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:32:19.109 18:30:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # local i 00:32:19.109 18:30:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:32:19.109 18:30:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 
00:32:19.109 18:30:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:32:19.109 18:30:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@877 -- # break 00:32:19.109 18:30:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:32:19.109 18:30:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:32:19.109 18:30:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:32:19.109 1+0 records in 00:32:19.109 1+0 records out 00:32:19.109 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000536173 s, 7.6 MB/s 00:32:19.109 18:30:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:32:19.109 18:30:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # size=4096 00:32:19.109 18:30:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:32:19.109 18:30:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:32:19.109 18:30:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@893 -- # return 0 00:32:19.109 18:30:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:32:19.109 18:30:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:32:19.109 18:30:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:32:19.109 18:30:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev2 ']' 00:32:19.109 18:30:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev2 /dev/nbd1 00:32:19.109 18:30:49 bdev_raid.raid_rebuild_test_sb_io 
-- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:32:19.109 18:30:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev2') 00:32:19.109 18:30:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:32:19.109 18:30:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:32:19.109 18:30:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:32:19.109 18:30:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:32:19.109 18:30:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:32:19.109 18:30:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:32:19.109 18:30:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev2 /dev/nbd1 00:32:19.368 /dev/nbd1 00:32:19.368 18:30:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:32:19.368 18:30:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:32:19.368 18:30:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:32:19.368 18:30:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # local i 00:32:19.368 18:30:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:32:19.368 18:30:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:32:19.368 18:30:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:32:19.368 18:30:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@877 -- # break 00:32:19.368 18:30:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:32:19.368 18:30:50 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:32:19.368 18:30:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:32:19.368 1+0 records in 00:32:19.368 1+0 records out 00:32:19.368 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000437268 s, 9.4 MB/s 00:32:19.368 18:30:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:32:19.368 18:30:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # size=4096 00:32:19.368 18:30:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:32:19.368 18:30:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:32:19.368 18:30:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@893 -- # return 0 00:32:19.368 18:30:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:32:19.368 18:30:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:32:19.368 18:30:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:32:19.627 18:30:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:32:19.627 18:30:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:32:19.627 18:30:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:32:19.627 18:30:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:32:19.627 18:30:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:32:19.627 18:30:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in 
"${nbd_list[@]}" 00:32:19.627 18:30:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:32:19.886 18:30:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:32:19.886 18:30:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:32:19.886 18:30:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:32:19.886 18:30:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:32:19.886 18:30:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:32:19.886 18:30:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:32:19.886 18:30:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:32:19.886 18:30:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:32:19.886 18:30:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:32:19.886 18:30:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:32:19.886 18:30:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:32:19.886 18:30:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:32:19.886 18:30:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:32:19.886 18:30:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:32:19.886 18:30:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:32:20.145 18:30:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:32:20.145 18:30:50 
bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:32:20.145 18:30:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:32:20.145 18:30:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:32:20.145 18:30:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:32:20.145 18:30:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:32:20.145 18:30:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:32:20.145 18:30:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:32:20.145 18:30:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:32:20.145 18:30:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:32:20.145 18:30:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:20.145 18:30:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:32:20.145 18:30:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:20.145 18:30:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:32:20.145 18:30:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:20.145 18:30:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:32:20.145 [2024-12-06 18:30:50.973444] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:32:20.145 [2024-12-06 18:30:50.973518] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:32:20.145 [2024-12-06 18:30:50.973547] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:32:20.145 [2024-12-06 
18:30:50.973560] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:32:20.145 [2024-12-06 18:30:50.976135] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:32:20.145 [2024-12-06 18:30:50.976199] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:32:20.145 [2024-12-06 18:30:50.976309] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:32:20.145 [2024-12-06 18:30:50.976363] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:32:20.145 [2024-12-06 18:30:50.976523] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:32:20.145 spare 00:32:20.145 18:30:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:20.145 18:30:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:32:20.145 18:30:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:20.145 18:30:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:32:20.145 [2024-12-06 18:30:51.076466] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:32:20.145 [2024-12-06 18:30:51.076532] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:32:20.145 [2024-12-06 18:30:51.076892] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b0d0 00:32:20.145 [2024-12-06 18:30:51.077084] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:32:20.145 [2024-12-06 18:30:51.077094] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:32:20.145 [2024-12-06 18:30:51.077355] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:32:20.145 18:30:51 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:20.145 18:30:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:32:20.145 18:30:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:32:20.145 18:30:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:32:20.145 18:30:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:32:20.145 18:30:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:32:20.145 18:30:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:32:20.145 18:30:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:32:20.145 18:30:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:32:20.145 18:30:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:32:20.145 18:30:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:32:20.145 18:30:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:20.145 18:30:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:20.145 18:30:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:20.146 18:30:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:32:20.404 18:30:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:20.404 18:30:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:32:20.404 "name": "raid_bdev1", 00:32:20.404 "uuid": "4f2abe3f-a4fe-4c4a-8610-e20229d55ba8", 00:32:20.404 "strip_size_kb": 0, 00:32:20.404 "state": 
"online", 00:32:20.404 "raid_level": "raid1", 00:32:20.404 "superblock": true, 00:32:20.404 "num_base_bdevs": 2, 00:32:20.404 "num_base_bdevs_discovered": 2, 00:32:20.404 "num_base_bdevs_operational": 2, 00:32:20.404 "base_bdevs_list": [ 00:32:20.404 { 00:32:20.404 "name": "spare", 00:32:20.404 "uuid": "6ac256e6-dd7f-53cc-aac8-5bc784815a63", 00:32:20.404 "is_configured": true, 00:32:20.404 "data_offset": 2048, 00:32:20.404 "data_size": 63488 00:32:20.404 }, 00:32:20.404 { 00:32:20.404 "name": "BaseBdev2", 00:32:20.404 "uuid": "03f98217-103d-59c0-ba24-ef0e4ce47ddc", 00:32:20.404 "is_configured": true, 00:32:20.404 "data_offset": 2048, 00:32:20.404 "data_size": 63488 00:32:20.404 } 00:32:20.404 ] 00:32:20.404 }' 00:32:20.404 18:30:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:32:20.404 18:30:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:32:20.663 18:30:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:32:20.663 18:30:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:32:20.663 18:30:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:32:20.663 18:30:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:32:20.663 18:30:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:32:20.663 18:30:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:20.663 18:30:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:20.663 18:30:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:32:20.663 18:30:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:20.663 18:30:51 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:20.663 18:30:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:32:20.663 "name": "raid_bdev1", 00:32:20.663 "uuid": "4f2abe3f-a4fe-4c4a-8610-e20229d55ba8", 00:32:20.663 "strip_size_kb": 0, 00:32:20.663 "state": "online", 00:32:20.663 "raid_level": "raid1", 00:32:20.663 "superblock": true, 00:32:20.663 "num_base_bdevs": 2, 00:32:20.663 "num_base_bdevs_discovered": 2, 00:32:20.663 "num_base_bdevs_operational": 2, 00:32:20.663 "base_bdevs_list": [ 00:32:20.663 { 00:32:20.663 "name": "spare", 00:32:20.663 "uuid": "6ac256e6-dd7f-53cc-aac8-5bc784815a63", 00:32:20.663 "is_configured": true, 00:32:20.663 "data_offset": 2048, 00:32:20.663 "data_size": 63488 00:32:20.663 }, 00:32:20.663 { 00:32:20.663 "name": "BaseBdev2", 00:32:20.663 "uuid": "03f98217-103d-59c0-ba24-ef0e4ce47ddc", 00:32:20.663 "is_configured": true, 00:32:20.663 "data_offset": 2048, 00:32:20.663 "data_size": 63488 00:32:20.663 } 00:32:20.663 ] 00:32:20.663 }' 00:32:20.663 18:30:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:32:20.921 18:30:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:32:20.921 18:30:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:32:20.921 18:30:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:32:20.921 18:30:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:20.921 18:30:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:20.921 18:30:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:32:20.921 18:30:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:32:20.921 18:30:51 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:20.921 18:30:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:32:20.921 18:30:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:32:20.921 18:30:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:20.921 18:30:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:32:20.921 [2024-12-06 18:30:51.729311] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:32:20.921 18:30:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:20.921 18:30:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:32:20.921 18:30:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:32:20.921 18:30:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:32:20.921 18:30:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:32:20.921 18:30:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:32:20.921 18:30:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:32:20.921 18:30:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:32:20.921 18:30:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:32:20.921 18:30:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:32:20.921 18:30:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:32:20.921 18:30:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:32:20.921 18:30:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:20.921 18:30:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:32:20.921 18:30:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:20.921 18:30:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:20.921 18:30:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:32:20.921 "name": "raid_bdev1", 00:32:20.921 "uuid": "4f2abe3f-a4fe-4c4a-8610-e20229d55ba8", 00:32:20.921 "strip_size_kb": 0, 00:32:20.921 "state": "online", 00:32:20.921 "raid_level": "raid1", 00:32:20.921 "superblock": true, 00:32:20.921 "num_base_bdevs": 2, 00:32:20.921 "num_base_bdevs_discovered": 1, 00:32:20.921 "num_base_bdevs_operational": 1, 00:32:20.921 "base_bdevs_list": [ 00:32:20.921 { 00:32:20.921 "name": null, 00:32:20.921 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:20.921 "is_configured": false, 00:32:20.921 "data_offset": 0, 00:32:20.921 "data_size": 63488 00:32:20.921 }, 00:32:20.921 { 00:32:20.921 "name": "BaseBdev2", 00:32:20.921 "uuid": "03f98217-103d-59c0-ba24-ef0e4ce47ddc", 00:32:20.921 "is_configured": true, 00:32:20.921 "data_offset": 2048, 00:32:20.921 "data_size": 63488 00:32:20.921 } 00:32:20.921 ] 00:32:20.921 }' 00:32:20.921 18:30:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:32:20.922 18:30:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:32:21.490 18:30:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:32:21.490 18:30:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:21.490 18:30:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:32:21.490 [2024-12-06 
18:30:52.165335] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:32:21.491 [2024-12-06 18:30:52.165546] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:32:21.491 [2024-12-06 18:30:52.165567] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:32:21.491 [2024-12-06 18:30:52.165611] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:32:21.491 [2024-12-06 18:30:52.182677] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b1a0 00:32:21.491 18:30:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:21.491 18:30:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@757 -- # sleep 1 00:32:21.491 [2024-12-06 18:30:52.184788] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:32:22.428 18:30:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:32:22.428 18:30:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:32:22.428 18:30:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:32:22.428 18:30:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:32:22.428 18:30:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:32:22.428 18:30:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:22.428 18:30:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:22.428 18:30:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:22.428 18:30:53 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@10 -- # set +x 00:32:22.428 18:30:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:22.428 18:30:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:32:22.428 "name": "raid_bdev1", 00:32:22.428 "uuid": "4f2abe3f-a4fe-4c4a-8610-e20229d55ba8", 00:32:22.428 "strip_size_kb": 0, 00:32:22.428 "state": "online", 00:32:22.428 "raid_level": "raid1", 00:32:22.428 "superblock": true, 00:32:22.428 "num_base_bdevs": 2, 00:32:22.428 "num_base_bdevs_discovered": 2, 00:32:22.428 "num_base_bdevs_operational": 2, 00:32:22.428 "process": { 00:32:22.428 "type": "rebuild", 00:32:22.428 "target": "spare", 00:32:22.428 "progress": { 00:32:22.428 "blocks": 20480, 00:32:22.428 "percent": 32 00:32:22.428 } 00:32:22.428 }, 00:32:22.428 "base_bdevs_list": [ 00:32:22.428 { 00:32:22.428 "name": "spare", 00:32:22.428 "uuid": "6ac256e6-dd7f-53cc-aac8-5bc784815a63", 00:32:22.428 "is_configured": true, 00:32:22.428 "data_offset": 2048, 00:32:22.428 "data_size": 63488 00:32:22.428 }, 00:32:22.428 { 00:32:22.428 "name": "BaseBdev2", 00:32:22.428 "uuid": "03f98217-103d-59c0-ba24-ef0e4ce47ddc", 00:32:22.428 "is_configured": true, 00:32:22.428 "data_offset": 2048, 00:32:22.428 "data_size": 63488 00:32:22.428 } 00:32:22.428 ] 00:32:22.428 }' 00:32:22.428 18:30:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:32:22.428 18:30:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:32:22.428 18:30:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:32:22.428 18:30:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:32:22.428 18:30:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:32:22.428 18:30:53 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:32:22.428 18:30:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:32:22.428 [2024-12-06 18:30:53.333506] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:32:22.687 [2024-12-06 18:30:53.390444] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:32:22.687 [2024-12-06 18:30:53.390538] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:32:22.687 [2024-12-06 18:30:53.390555] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:32:22.687 [2024-12-06 18:30:53.390568] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:32:22.687 18:30:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:22.687 18:30:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:32:22.687 18:30:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:32:22.687 18:30:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:32:22.687 18:30:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:32:22.687 18:30:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:32:22.687 18:30:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:32:22.687 18:30:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:32:22.687 18:30:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:32:22.687 18:30:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:32:22.687 18:30:53 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:32:22.687 18:30:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:22.687 18:30:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:22.687 18:30:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:22.687 18:30:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:32:22.687 18:30:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:22.687 18:30:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:32:22.687 "name": "raid_bdev1", 00:32:22.687 "uuid": "4f2abe3f-a4fe-4c4a-8610-e20229d55ba8", 00:32:22.687 "strip_size_kb": 0, 00:32:22.687 "state": "online", 00:32:22.687 "raid_level": "raid1", 00:32:22.687 "superblock": true, 00:32:22.687 "num_base_bdevs": 2, 00:32:22.687 "num_base_bdevs_discovered": 1, 00:32:22.687 "num_base_bdevs_operational": 1, 00:32:22.687 "base_bdevs_list": [ 00:32:22.687 { 00:32:22.687 "name": null, 00:32:22.687 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:22.687 "is_configured": false, 00:32:22.687 "data_offset": 0, 00:32:22.687 "data_size": 63488 00:32:22.687 }, 00:32:22.687 { 00:32:22.687 "name": "BaseBdev2", 00:32:22.687 "uuid": "03f98217-103d-59c0-ba24-ef0e4ce47ddc", 00:32:22.687 "is_configured": true, 00:32:22.687 "data_offset": 2048, 00:32:22.687 "data_size": 63488 00:32:22.687 } 00:32:22.687 ] 00:32:22.688 }' 00:32:22.688 18:30:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:32:22.688 18:30:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:32:22.947 18:30:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:32:22.947 18:30:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 
-- # xtrace_disable 00:32:22.947 18:30:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:32:22.947 [2024-12-06 18:30:53.830483] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:32:22.947 [2024-12-06 18:30:53.830578] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:32:22.947 [2024-12-06 18:30:53.830604] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:32:22.947 [2024-12-06 18:30:53.830621] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:32:22.947 [2024-12-06 18:30:53.831127] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:32:22.947 [2024-12-06 18:30:53.831392] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:32:22.947 [2024-12-06 18:30:53.831560] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:32:22.947 [2024-12-06 18:30:53.831726] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:32:22.947 [2024-12-06 18:30:53.831874] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:32:22.947 [2024-12-06 18:30:53.831955] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:32:22.947 [2024-12-06 18:30:53.848807] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b270 00:32:22.947 spare 00:32:22.947 18:30:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:22.947 18:30:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@764 -- # sleep 1 00:32:22.947 [2024-12-06 18:30:53.851258] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:32:24.322 18:30:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:32:24.322 18:30:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:32:24.322 18:30:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:32:24.322 18:30:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:32:24.322 18:30:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:32:24.322 18:30:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:24.322 18:30:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:24.322 18:30:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:32:24.322 18:30:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:24.322 18:30:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:24.322 18:30:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:32:24.322 "name": "raid_bdev1", 00:32:24.322 "uuid": "4f2abe3f-a4fe-4c4a-8610-e20229d55ba8", 00:32:24.322 "strip_size_kb": 0, 00:32:24.322 
"state": "online", 00:32:24.322 "raid_level": "raid1", 00:32:24.322 "superblock": true, 00:32:24.322 "num_base_bdevs": 2, 00:32:24.322 "num_base_bdevs_discovered": 2, 00:32:24.322 "num_base_bdevs_operational": 2, 00:32:24.322 "process": { 00:32:24.322 "type": "rebuild", 00:32:24.322 "target": "spare", 00:32:24.322 "progress": { 00:32:24.322 "blocks": 20480, 00:32:24.322 "percent": 32 00:32:24.322 } 00:32:24.322 }, 00:32:24.322 "base_bdevs_list": [ 00:32:24.322 { 00:32:24.322 "name": "spare", 00:32:24.322 "uuid": "6ac256e6-dd7f-53cc-aac8-5bc784815a63", 00:32:24.322 "is_configured": true, 00:32:24.322 "data_offset": 2048, 00:32:24.322 "data_size": 63488 00:32:24.322 }, 00:32:24.322 { 00:32:24.322 "name": "BaseBdev2", 00:32:24.322 "uuid": "03f98217-103d-59c0-ba24-ef0e4ce47ddc", 00:32:24.322 "is_configured": true, 00:32:24.322 "data_offset": 2048, 00:32:24.322 "data_size": 63488 00:32:24.322 } 00:32:24.322 ] 00:32:24.322 }' 00:32:24.322 18:30:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:32:24.322 18:30:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:32:24.322 18:30:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:32:24.323 18:30:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:32:24.323 18:30:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:32:24.323 18:30:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:24.323 18:30:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:32:24.323 [2024-12-06 18:30:55.006869] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:32:24.323 [2024-12-06 18:30:55.056990] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 
00:32:24.323 [2024-12-06 18:30:55.057329] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:32:24.323 [2024-12-06 18:30:55.057531] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:32:24.323 [2024-12-06 18:30:55.057574] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:32:24.323 18:30:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:24.323 18:30:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:32:24.323 18:30:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:32:24.323 18:30:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:32:24.323 18:30:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:32:24.323 18:30:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:32:24.323 18:30:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:32:24.323 18:30:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:32:24.323 18:30:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:32:24.323 18:30:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:32:24.323 18:30:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:32:24.323 18:30:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:24.323 18:30:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:24.323 18:30:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:32:24.323 18:30:55 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:24.323 18:30:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:24.323 18:30:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:32:24.323 "name": "raid_bdev1", 00:32:24.323 "uuid": "4f2abe3f-a4fe-4c4a-8610-e20229d55ba8", 00:32:24.323 "strip_size_kb": 0, 00:32:24.323 "state": "online", 00:32:24.323 "raid_level": "raid1", 00:32:24.323 "superblock": true, 00:32:24.323 "num_base_bdevs": 2, 00:32:24.323 "num_base_bdevs_discovered": 1, 00:32:24.323 "num_base_bdevs_operational": 1, 00:32:24.323 "base_bdevs_list": [ 00:32:24.323 { 00:32:24.323 "name": null, 00:32:24.323 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:24.323 "is_configured": false, 00:32:24.323 "data_offset": 0, 00:32:24.323 "data_size": 63488 00:32:24.323 }, 00:32:24.323 { 00:32:24.323 "name": "BaseBdev2", 00:32:24.323 "uuid": "03f98217-103d-59c0-ba24-ef0e4ce47ddc", 00:32:24.323 "is_configured": true, 00:32:24.323 "data_offset": 2048, 00:32:24.323 "data_size": 63488 00:32:24.323 } 00:32:24.323 ] 00:32:24.323 }' 00:32:24.323 18:30:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:32:24.323 18:30:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:32:24.889 18:30:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:32:24.889 18:30:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:32:24.889 18:30:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:32:24.889 18:30:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:32:24.889 18:30:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:32:24.889 18:30:55 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:24.889 18:30:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:24.889 18:30:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:32:24.889 18:30:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:24.889 18:30:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:24.889 18:30:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:32:24.889 "name": "raid_bdev1", 00:32:24.889 "uuid": "4f2abe3f-a4fe-4c4a-8610-e20229d55ba8", 00:32:24.889 "strip_size_kb": 0, 00:32:24.889 "state": "online", 00:32:24.889 "raid_level": "raid1", 00:32:24.889 "superblock": true, 00:32:24.889 "num_base_bdevs": 2, 00:32:24.889 "num_base_bdevs_discovered": 1, 00:32:24.889 "num_base_bdevs_operational": 1, 00:32:24.889 "base_bdevs_list": [ 00:32:24.889 { 00:32:24.889 "name": null, 00:32:24.889 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:24.889 "is_configured": false, 00:32:24.889 "data_offset": 0, 00:32:24.889 "data_size": 63488 00:32:24.889 }, 00:32:24.889 { 00:32:24.889 "name": "BaseBdev2", 00:32:24.889 "uuid": "03f98217-103d-59c0-ba24-ef0e4ce47ddc", 00:32:24.889 "is_configured": true, 00:32:24.889 "data_offset": 2048, 00:32:24.889 "data_size": 63488 00:32:24.889 } 00:32:24.889 ] 00:32:24.889 }' 00:32:24.889 18:30:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:32:24.889 18:30:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:32:24.889 18:30:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:32:24.889 18:30:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:32:24.889 18:30:55 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:32:24.889 18:30:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:24.889 18:30:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:32:24.889 18:30:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:24.889 18:30:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:32:24.889 18:30:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:24.889 18:30:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:32:24.889 [2024-12-06 18:30:55.729290] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:32:24.889 [2024-12-06 18:30:55.729374] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:32:24.889 [2024-12-06 18:30:55.729408] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:32:24.889 [2024-12-06 18:30:55.729423] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:32:24.889 [2024-12-06 18:30:55.729903] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:32:24.889 [2024-12-06 18:30:55.729926] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:32:24.889 [2024-12-06 18:30:55.730018] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:32:24.889 [2024-12-06 18:30:55.730034] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:32:24.889 [2024-12-06 18:30:55.730050] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:32:24.889 [2024-12-06 18:30:55.730063] 
bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:32:24.889 BaseBdev1 00:32:24.889 18:30:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:24.889 18:30:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@775 -- # sleep 1 00:32:25.842 18:30:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:32:25.842 18:30:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:32:25.842 18:30:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:32:25.842 18:30:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:32:25.842 18:30:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:32:25.842 18:30:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:32:25.842 18:30:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:32:25.842 18:30:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:32:25.842 18:30:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:32:25.842 18:30:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:32:25.842 18:30:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:25.842 18:30:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:25.842 18:30:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:25.842 18:30:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:32:25.842 18:30:56 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:26.145 18:30:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:32:26.145 "name": "raid_bdev1", 00:32:26.145 "uuid": "4f2abe3f-a4fe-4c4a-8610-e20229d55ba8", 00:32:26.145 "strip_size_kb": 0, 00:32:26.145 "state": "online", 00:32:26.145 "raid_level": "raid1", 00:32:26.145 "superblock": true, 00:32:26.145 "num_base_bdevs": 2, 00:32:26.145 "num_base_bdevs_discovered": 1, 00:32:26.145 "num_base_bdevs_operational": 1, 00:32:26.145 "base_bdevs_list": [ 00:32:26.145 { 00:32:26.145 "name": null, 00:32:26.145 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:26.145 "is_configured": false, 00:32:26.145 "data_offset": 0, 00:32:26.145 "data_size": 63488 00:32:26.145 }, 00:32:26.145 { 00:32:26.145 "name": "BaseBdev2", 00:32:26.145 "uuid": "03f98217-103d-59c0-ba24-ef0e4ce47ddc", 00:32:26.145 "is_configured": true, 00:32:26.145 "data_offset": 2048, 00:32:26.145 "data_size": 63488 00:32:26.145 } 00:32:26.145 ] 00:32:26.145 }' 00:32:26.145 18:30:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:32:26.145 18:30:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:32:26.405 18:30:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:32:26.405 18:30:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:32:26.405 18:30:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:32:26.405 18:30:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:32:26.405 18:30:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:32:26.405 18:30:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:26.405 18:30:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- 
# xtrace_disable 00:32:26.405 18:30:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:32:26.405 18:30:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:26.405 18:30:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:26.405 18:30:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:32:26.405 "name": "raid_bdev1", 00:32:26.405 "uuid": "4f2abe3f-a4fe-4c4a-8610-e20229d55ba8", 00:32:26.405 "strip_size_kb": 0, 00:32:26.405 "state": "online", 00:32:26.405 "raid_level": "raid1", 00:32:26.405 "superblock": true, 00:32:26.405 "num_base_bdevs": 2, 00:32:26.405 "num_base_bdevs_discovered": 1, 00:32:26.405 "num_base_bdevs_operational": 1, 00:32:26.405 "base_bdevs_list": [ 00:32:26.405 { 00:32:26.405 "name": null, 00:32:26.405 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:26.405 "is_configured": false, 00:32:26.405 "data_offset": 0, 00:32:26.405 "data_size": 63488 00:32:26.405 }, 00:32:26.405 { 00:32:26.405 "name": "BaseBdev2", 00:32:26.405 "uuid": "03f98217-103d-59c0-ba24-ef0e4ce47ddc", 00:32:26.405 "is_configured": true, 00:32:26.405 "data_offset": 2048, 00:32:26.405 "data_size": 63488 00:32:26.405 } 00:32:26.405 ] 00:32:26.405 }' 00:32:26.405 18:30:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:32:26.405 18:30:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:32:26.405 18:30:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:32:26.405 18:30:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:32:26.405 18:30:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:32:26.405 18:30:57 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@652 -- # local es=0 00:32:26.405 18:30:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:32:26.405 18:30:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:32:26.405 18:30:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:32:26.405 18:30:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:32:26.405 18:30:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:32:26.405 18:30:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:32:26.405 18:30:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:26.405 18:30:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:32:26.405 [2024-12-06 18:30:57.290427] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:32:26.405 [2024-12-06 18:30:57.290677] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:32:26.405 [2024-12-06 18:30:57.290704] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:32:26.405 request: 00:32:26.405 { 00:32:26.405 "base_bdev": "BaseBdev1", 00:32:26.405 "raid_bdev": "raid_bdev1", 00:32:26.405 "method": "bdev_raid_add_base_bdev", 00:32:26.405 "req_id": 1 00:32:26.405 } 00:32:26.405 Got JSON-RPC error response 00:32:26.405 response: 00:32:26.405 { 00:32:26.405 "code": -22, 00:32:26.405 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:32:26.405 } 00:32:26.405 18:30:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 
00:32:26.405 18:30:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@655 -- # es=1 00:32:26.405 18:30:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:32:26.405 18:30:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:32:26.405 18:30:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:32:26.405 18:30:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@779 -- # sleep 1 00:32:27.785 18:30:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:32:27.785 18:30:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:32:27.785 18:30:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:32:27.785 18:30:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:32:27.785 18:30:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:32:27.785 18:30:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:32:27.785 18:30:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:32:27.785 18:30:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:32:27.785 18:30:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:32:27.785 18:30:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:32:27.785 18:30:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:27.785 18:30:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:27.785 18:30:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:32:27.785 18:30:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:32:27.785 18:30:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:27.785 18:30:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:32:27.785 "name": "raid_bdev1", 00:32:27.785 "uuid": "4f2abe3f-a4fe-4c4a-8610-e20229d55ba8", 00:32:27.785 "strip_size_kb": 0, 00:32:27.785 "state": "online", 00:32:27.785 "raid_level": "raid1", 00:32:27.785 "superblock": true, 00:32:27.785 "num_base_bdevs": 2, 00:32:27.785 "num_base_bdevs_discovered": 1, 00:32:27.785 "num_base_bdevs_operational": 1, 00:32:27.785 "base_bdevs_list": [ 00:32:27.785 { 00:32:27.785 "name": null, 00:32:27.785 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:27.785 "is_configured": false, 00:32:27.785 "data_offset": 0, 00:32:27.785 "data_size": 63488 00:32:27.785 }, 00:32:27.785 { 00:32:27.785 "name": "BaseBdev2", 00:32:27.785 "uuid": "03f98217-103d-59c0-ba24-ef0e4ce47ddc", 00:32:27.785 "is_configured": true, 00:32:27.785 "data_offset": 2048, 00:32:27.785 "data_size": 63488 00:32:27.785 } 00:32:27.785 ] 00:32:27.785 }' 00:32:27.785 18:30:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:32:27.785 18:30:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:32:27.785 18:30:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:32:27.785 18:30:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:32:27.785 18:30:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:32:27.785 18:30:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:32:27.785 18:30:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:32:28.045 18:30:58 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:28.045 18:30:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:28.045 18:30:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:28.045 18:30:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:32:28.045 18:30:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:28.045 18:30:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:32:28.045 "name": "raid_bdev1", 00:32:28.045 "uuid": "4f2abe3f-a4fe-4c4a-8610-e20229d55ba8", 00:32:28.045 "strip_size_kb": 0, 00:32:28.045 "state": "online", 00:32:28.045 "raid_level": "raid1", 00:32:28.045 "superblock": true, 00:32:28.045 "num_base_bdevs": 2, 00:32:28.045 "num_base_bdevs_discovered": 1, 00:32:28.045 "num_base_bdevs_operational": 1, 00:32:28.045 "base_bdevs_list": [ 00:32:28.045 { 00:32:28.045 "name": null, 00:32:28.045 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:28.045 "is_configured": false, 00:32:28.045 "data_offset": 0, 00:32:28.045 "data_size": 63488 00:32:28.045 }, 00:32:28.045 { 00:32:28.045 "name": "BaseBdev2", 00:32:28.045 "uuid": "03f98217-103d-59c0-ba24-ef0e4ce47ddc", 00:32:28.045 "is_configured": true, 00:32:28.045 "data_offset": 2048, 00:32:28.045 "data_size": 63488 00:32:28.045 } 00:32:28.045 ] 00:32:28.045 }' 00:32:28.045 18:30:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:32:28.045 18:30:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:32:28.045 18:30:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:32:28.045 18:30:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:32:28.045 18:30:58 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@784 -- # killprocess 76592 00:32:28.045 18:30:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@954 -- # '[' -z 76592 ']' 00:32:28.045 18:30:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@958 -- # kill -0 76592 00:32:28.045 18:30:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@959 -- # uname 00:32:28.045 18:30:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:28.045 18:30:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76592 00:32:28.045 killing process with pid 76592 00:32:28.045 Received shutdown signal, test time was about 17.016980 seconds 00:32:28.045 00:32:28.045 Latency(us) 00:32:28.045 [2024-12-06T18:30:58.994Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:28.045 [2024-12-06T18:30:58.994Z] =================================================================================================================== 00:32:28.045 [2024-12-06T18:30:58.994Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:32:28.045 18:30:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:32:28.045 18:30:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:32:28.045 18:30:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76592' 00:32:28.045 18:30:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@973 -- # kill 76592 00:32:28.045 [2024-12-06 18:30:58.896398] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:32:28.045 18:30:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@978 -- # wait 76592 00:32:28.045 [2024-12-06 18:30:58.896591] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:32:28.045 [2024-12-06 18:30:58.896663] 
bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:32:28.045 [2024-12-06 18:30:58.896684] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:32:28.304 [2024-12-06 18:30:59.155913] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:32:29.682 18:31:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@786 -- # return 0 00:32:29.682 00:32:29.682 real 0m20.335s 00:32:29.682 user 0m26.224s 00:32:29.682 sys 0m2.600s 00:32:29.682 18:31:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:29.682 18:31:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:32:29.682 ************************************ 00:32:29.682 END TEST raid_rebuild_test_sb_io 00:32:29.682 ************************************ 00:32:29.682 18:31:00 bdev_raid -- bdev/bdev_raid.sh@977 -- # for n in 2 4 00:32:29.682 18:31:00 bdev_raid -- bdev/bdev_raid.sh@978 -- # run_test raid_rebuild_test raid_rebuild_test raid1 4 false false true 00:32:29.682 18:31:00 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:32:29.697 18:31:00 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:29.697 18:31:00 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:32:29.697 ************************************ 00:32:29.697 START TEST raid_rebuild_test 00:32:29.697 ************************************ 00:32:29.697 18:31:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 4 false false true 00:32:29.697 18:31:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:32:29.697 18:31:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:32:29.697 18:31:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:32:29.697 18:31:00 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:32:29.697 18:31:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:32:29.697 18:31:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:32:29.697 18:31:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:32:29.697 18:31:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:32:29.697 18:31:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:32:29.697 18:31:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:32:29.697 18:31:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:32:29.697 18:31:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:32:29.697 18:31:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:32:29.697 18:31:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:32:29.697 18:31:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:32:29.697 18:31:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:32:29.697 18:31:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:32:29.697 18:31:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:32:29.697 18:31:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:32:29.697 18:31:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:32:29.697 18:31:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:32:29.697 18:31:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:32:29.697 18:31:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 
00:32:29.697 18:31:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:32:29.697 18:31:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:32:29.697 18:31:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:32:29.697 18:31:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:32:29.697 18:31:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:32:29.697 18:31:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:32:29.697 18:31:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=77276 00:32:29.697 18:31:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 77276 00:32:29.697 18:31:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:32:29.697 18:31:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@835 -- # '[' -z 77276 ']' 00:32:29.697 18:31:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:29.697 18:31:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:29.697 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:29.697 18:31:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:29.697 18:31:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:29.697 18:31:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:32:29.957 I/O size of 3145728 is greater than zero copy threshold (65536). 00:32:29.957 Zero copy mechanism will not be used. 
00:32:29.957 [2024-12-06 18:31:00.677073] Starting SPDK v25.01-pre git sha1 b6a18b192 / DPDK 24.03.0 initialization... 00:32:29.957 [2024-12-06 18:31:00.677251] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77276 ] 00:32:29.957 [2024-12-06 18:31:00.864882] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:30.217 [2024-12-06 18:31:01.020022] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:30.476 [2024-12-06 18:31:01.285584] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:32:30.476 [2024-12-06 18:31:01.285664] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:32:30.735 18:31:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:30.735 18:31:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # return 0 00:32:30.735 18:31:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:32:30.735 18:31:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:32:30.735 18:31:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:30.735 18:31:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:32:30.735 BaseBdev1_malloc 00:32:30.735 18:31:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:30.735 18:31:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:32:30.735 18:31:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:30.735 18:31:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:32:30.735 
[2024-12-06 18:31:01.596424] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:32:30.735 [2024-12-06 18:31:01.596518] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:32:30.735 [2024-12-06 18:31:01.596550] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:32:30.735 [2024-12-06 18:31:01.596567] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:32:30.735 [2024-12-06 18:31:01.599435] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:32:30.735 [2024-12-06 18:31:01.599505] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:32:30.735 BaseBdev1 00:32:30.735 18:31:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:30.735 18:31:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:32:30.735 18:31:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:32:30.735 18:31:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:30.735 18:31:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:32:30.735 BaseBdev2_malloc 00:32:30.735 18:31:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:30.735 18:31:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:32:30.735 18:31:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:30.735 18:31:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:32:30.735 [2024-12-06 18:31:01.656873] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:32:30.735 [2024-12-06 18:31:01.656985] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev 
opened 00:32:30.735 [2024-12-06 18:31:01.657023] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:32:30.735 [2024-12-06 18:31:01.657040] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:32:30.735 [2024-12-06 18:31:01.660045] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:32:30.735 [2024-12-06 18:31:01.660112] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:32:30.735 BaseBdev2 00:32:30.735 18:31:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:30.735 18:31:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:32:30.735 18:31:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:32:30.735 18:31:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:30.735 18:31:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:32:30.995 BaseBdev3_malloc 00:32:30.995 18:31:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:30.995 18:31:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:32:30.995 18:31:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:30.995 18:31:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:32:30.995 [2024-12-06 18:31:01.734009] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:32:30.995 [2024-12-06 18:31:01.734124] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:32:30.995 [2024-12-06 18:31:01.734170] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:32:30.995 [2024-12-06 18:31:01.734188] vbdev_passthru.c: 
697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:32:30.995 [2024-12-06 18:31:01.737174] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:32:30.995 [2024-12-06 18:31:01.737236] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:32:30.995 BaseBdev3 00:32:30.995 18:31:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:30.995 18:31:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:32:30.995 18:31:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:32:30.995 18:31:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:30.995 18:31:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:32:30.995 BaseBdev4_malloc 00:32:30.995 18:31:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:30.995 18:31:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:32:30.995 18:31:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:30.995 18:31:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:32:30.995 [2024-12-06 18:31:01.798204] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:32:30.995 [2024-12-06 18:31:01.798298] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:32:30.995 [2024-12-06 18:31:01.798330] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:32:30.995 [2024-12-06 18:31:01.798356] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:32:30.995 [2024-12-06 18:31:01.801264] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:32:30.995 [2024-12-06 18:31:01.801320] 
vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:32:30.995 BaseBdev4 00:32:30.995 18:31:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:30.995 18:31:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:32:30.995 18:31:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:30.995 18:31:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:32:30.995 spare_malloc 00:32:30.995 18:31:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:30.995 18:31:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:32:30.995 18:31:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:30.995 18:31:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:32:30.995 spare_delay 00:32:30.995 18:31:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:30.995 18:31:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:32:30.995 18:31:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:30.995 18:31:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:32:30.995 [2024-12-06 18:31:01.874239] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:32:30.995 [2024-12-06 18:31:01.874339] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:32:30.995 [2024-12-06 18:31:01.874378] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:32:30.995 [2024-12-06 18:31:01.874394] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:32:30.995 [2024-12-06 
18:31:01.877283] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:32:30.995 [2024-12-06 18:31:01.877334] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:32:30.995 spare 00:32:30.995 18:31:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:30.995 18:31:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:32:30.995 18:31:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:30.995 18:31:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:32:30.995 [2024-12-06 18:31:01.886400] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:32:30.995 [2024-12-06 18:31:01.888897] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:32:30.995 [2024-12-06 18:31:01.888976] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:32:30.995 [2024-12-06 18:31:01.889033] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:32:30.995 [2024-12-06 18:31:01.889138] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:32:30.995 [2024-12-06 18:31:01.889172] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:32:30.995 [2024-12-06 18:31:01.889532] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:32:30.995 [2024-12-06 18:31:01.889744] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:32:30.995 [2024-12-06 18:31:01.889760] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:32:30.995 [2024-12-06 18:31:01.889961] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 
00:32:30.995 18:31:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:30.995 18:31:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:32:30.995 18:31:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:32:30.995 18:31:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:32:30.995 18:31:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:32:30.995 18:31:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:32:30.995 18:31:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:32:30.995 18:31:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:32:30.995 18:31:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:32:30.995 18:31:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:32:30.995 18:31:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:32:30.995 18:31:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:30.995 18:31:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:30.995 18:31:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:30.995 18:31:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:32:30.995 18:31:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:31.254 18:31:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:32:31.254 "name": "raid_bdev1", 00:32:31.254 "uuid": "d409f0c7-bfe0-4602-961f-9c6b83271d5a", 00:32:31.254 "strip_size_kb": 0, 00:32:31.254 "state": "online", 00:32:31.254 "raid_level": 
"raid1", 00:32:31.254 "superblock": false, 00:32:31.254 "num_base_bdevs": 4, 00:32:31.254 "num_base_bdevs_discovered": 4, 00:32:31.254 "num_base_bdevs_operational": 4, 00:32:31.254 "base_bdevs_list": [ 00:32:31.254 { 00:32:31.254 "name": "BaseBdev1", 00:32:31.254 "uuid": "5d97c5f7-94ad-5312-90b2-0d074037b137", 00:32:31.254 "is_configured": true, 00:32:31.254 "data_offset": 0, 00:32:31.254 "data_size": 65536 00:32:31.254 }, 00:32:31.254 { 00:32:31.254 "name": "BaseBdev2", 00:32:31.254 "uuid": "1446ebbb-de75-585d-8bef-c1151c85ef8e", 00:32:31.254 "is_configured": true, 00:32:31.254 "data_offset": 0, 00:32:31.254 "data_size": 65536 00:32:31.254 }, 00:32:31.254 { 00:32:31.254 "name": "BaseBdev3", 00:32:31.254 "uuid": "a3b627c4-52a7-50e1-b231-25c7b7f8b56c", 00:32:31.254 "is_configured": true, 00:32:31.254 "data_offset": 0, 00:32:31.254 "data_size": 65536 00:32:31.254 }, 00:32:31.254 { 00:32:31.254 "name": "BaseBdev4", 00:32:31.254 "uuid": "cc79f831-60a0-5610-90c3-de5b874ac95e", 00:32:31.254 "is_configured": true, 00:32:31.254 "data_offset": 0, 00:32:31.254 "data_size": 65536 00:32:31.254 } 00:32:31.254 ] 00:32:31.254 }' 00:32:31.254 18:31:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:32:31.254 18:31:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:32:31.513 18:31:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:32:31.513 18:31:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:31.513 18:31:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:32:31.513 18:31:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:32:31.513 [2024-12-06 18:31:02.306745] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:32:31.513 18:31:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:31.513 18:31:02 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:32:31.513 18:31:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:31.513 18:31:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:32:31.513 18:31:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:31.513 18:31:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:32:31.513 18:31:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:31.513 18:31:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:32:31.513 18:31:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:32:31.513 18:31:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:32:31.513 18:31:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:32:31.513 18:31:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:32:31.513 18:31:02 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:32:31.513 18:31:02 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:32:31.513 18:31:02 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:32:31.513 18:31:02 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:32:31.513 18:31:02 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:32:31.513 18:31:02 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:32:31.513 18:31:02 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:32:31.513 18:31:02 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:32:31.513 18:31:02 bdev_raid.raid_rebuild_test -- 
bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:32:31.772 [2024-12-06 18:31:02.586457] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:32:31.772 /dev/nbd0 00:32:31.772 18:31:02 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:32:31.772 18:31:02 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:32:31.772 18:31:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:32:31.772 18:31:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:32:31.772 18:31:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:32:31.772 18:31:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:32:31.772 18:31:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:32:31.772 18:31:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:32:31.772 18:31:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:32:31.772 18:31:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:32:31.772 18:31:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:32:31.772 1+0 records in 00:32:31.772 1+0 records out 00:32:31.772 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000482272 s, 8.5 MB/s 00:32:31.772 18:31:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:32:31.772 18:31:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:32:31.772 18:31:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 
00:32:31.772 18:31:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:32:31.772 18:31:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:32:31.772 18:31:02 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:32:31.772 18:31:02 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:32:31.772 18:31:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:32:31.772 18:31:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:32:31.772 18:31:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=65536 oflag=direct 00:32:38.340 65536+0 records in 00:32:38.340 65536+0 records out 00:32:38.340 33554432 bytes (34 MB, 32 MiB) copied, 6.48402 s, 5.2 MB/s 00:32:38.340 18:31:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:32:38.340 18:31:09 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:32:38.340 18:31:09 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:32:38.340 18:31:09 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:32:38.340 18:31:09 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:32:38.340 18:31:09 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:32:38.340 18:31:09 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:32:38.599 [2024-12-06 18:31:09.357990] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:32:38.599 18:31:09 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:32:38.599 18:31:09 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:32:38.599 
18:31:09 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:32:38.599 18:31:09 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:32:38.599 18:31:09 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:32:38.599 18:31:09 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:32:38.599 18:31:09 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:32:38.599 18:31:09 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:32:38.599 18:31:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:32:38.599 18:31:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:38.599 18:31:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:32:38.599 [2024-12-06 18:31:09.402099] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:32:38.599 18:31:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:38.599 18:31:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:32:38.599 18:31:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:32:38.599 18:31:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:32:38.599 18:31:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:32:38.599 18:31:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:32:38.599 18:31:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:32:38.599 18:31:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:32:38.599 18:31:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:32:38.599 18:31:09 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:32:38.599 18:31:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:32:38.599 18:31:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:38.599 18:31:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:38.600 18:31:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:38.600 18:31:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:32:38.600 18:31:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:38.600 18:31:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:32:38.600 "name": "raid_bdev1", 00:32:38.600 "uuid": "d409f0c7-bfe0-4602-961f-9c6b83271d5a", 00:32:38.600 "strip_size_kb": 0, 00:32:38.600 "state": "online", 00:32:38.600 "raid_level": "raid1", 00:32:38.600 "superblock": false, 00:32:38.600 "num_base_bdevs": 4, 00:32:38.600 "num_base_bdevs_discovered": 3, 00:32:38.600 "num_base_bdevs_operational": 3, 00:32:38.600 "base_bdevs_list": [ 00:32:38.600 { 00:32:38.600 "name": null, 00:32:38.600 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:38.600 "is_configured": false, 00:32:38.600 "data_offset": 0, 00:32:38.600 "data_size": 65536 00:32:38.600 }, 00:32:38.600 { 00:32:38.600 "name": "BaseBdev2", 00:32:38.600 "uuid": "1446ebbb-de75-585d-8bef-c1151c85ef8e", 00:32:38.600 "is_configured": true, 00:32:38.600 "data_offset": 0, 00:32:38.600 "data_size": 65536 00:32:38.600 }, 00:32:38.600 { 00:32:38.600 "name": "BaseBdev3", 00:32:38.600 "uuid": "a3b627c4-52a7-50e1-b231-25c7b7f8b56c", 00:32:38.600 "is_configured": true, 00:32:38.600 "data_offset": 0, 00:32:38.600 "data_size": 65536 00:32:38.600 }, 00:32:38.600 { 00:32:38.600 "name": "BaseBdev4", 00:32:38.600 "uuid": "cc79f831-60a0-5610-90c3-de5b874ac95e", 00:32:38.600 
"is_configured": true, 00:32:38.600 "data_offset": 0, 00:32:38.600 "data_size": 65536 00:32:38.600 } 00:32:38.600 ] 00:32:38.600 }' 00:32:38.600 18:31:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:32:38.600 18:31:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:32:39.237 18:31:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:32:39.237 18:31:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:39.237 18:31:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:32:39.237 [2024-12-06 18:31:09.849439] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:32:39.237 [2024-12-06 18:31:09.866358] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09d70 00:32:39.237 18:31:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:39.237 18:31:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:32:39.237 [2024-12-06 18:31:09.868955] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:32:40.200 18:31:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:32:40.200 18:31:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:32:40.200 18:31:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:32:40.200 18:31:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:32:40.200 18:31:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:32:40.200 18:31:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:40.200 18:31:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:40.200 
18:31:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:40.200 18:31:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:32:40.200 18:31:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:40.200 18:31:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:32:40.200 "name": "raid_bdev1", 00:32:40.200 "uuid": "d409f0c7-bfe0-4602-961f-9c6b83271d5a", 00:32:40.200 "strip_size_kb": 0, 00:32:40.200 "state": "online", 00:32:40.200 "raid_level": "raid1", 00:32:40.200 "superblock": false, 00:32:40.200 "num_base_bdevs": 4, 00:32:40.200 "num_base_bdevs_discovered": 4, 00:32:40.200 "num_base_bdevs_operational": 4, 00:32:40.200 "process": { 00:32:40.200 "type": "rebuild", 00:32:40.200 "target": "spare", 00:32:40.200 "progress": { 00:32:40.200 "blocks": 20480, 00:32:40.200 "percent": 31 00:32:40.200 } 00:32:40.200 }, 00:32:40.200 "base_bdevs_list": [ 00:32:40.200 { 00:32:40.200 "name": "spare", 00:32:40.200 "uuid": "ade674f1-9aa7-5c20-a549-f64c76f5fe49", 00:32:40.200 "is_configured": true, 00:32:40.200 "data_offset": 0, 00:32:40.200 "data_size": 65536 00:32:40.200 }, 00:32:40.200 { 00:32:40.200 "name": "BaseBdev2", 00:32:40.200 "uuid": "1446ebbb-de75-585d-8bef-c1151c85ef8e", 00:32:40.200 "is_configured": true, 00:32:40.200 "data_offset": 0, 00:32:40.200 "data_size": 65536 00:32:40.200 }, 00:32:40.200 { 00:32:40.200 "name": "BaseBdev3", 00:32:40.200 "uuid": "a3b627c4-52a7-50e1-b231-25c7b7f8b56c", 00:32:40.200 "is_configured": true, 00:32:40.200 "data_offset": 0, 00:32:40.200 "data_size": 65536 00:32:40.200 }, 00:32:40.200 { 00:32:40.200 "name": "BaseBdev4", 00:32:40.200 "uuid": "cc79f831-60a0-5610-90c3-de5b874ac95e", 00:32:40.200 "is_configured": true, 00:32:40.200 "data_offset": 0, 00:32:40.200 "data_size": 65536 00:32:40.200 } 00:32:40.200 ] 00:32:40.200 }' 00:32:40.200 18:31:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 
-- # jq -r '.process.type // "none"' 00:32:40.200 18:31:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:32:40.200 18:31:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:32:40.200 18:31:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:32:40.201 18:31:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:32:40.201 18:31:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:40.201 18:31:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:32:40.201 [2024-12-06 18:31:11.021393] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:32:40.201 [2024-12-06 18:31:11.080260] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:32:40.201 [2024-12-06 18:31:11.080369] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:32:40.201 [2024-12-06 18:31:11.080391] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:32:40.201 [2024-12-06 18:31:11.080405] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:32:40.201 18:31:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:40.201 18:31:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:32:40.201 18:31:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:32:40.201 18:31:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:32:40.201 18:31:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:32:40.201 18:31:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:32:40.201 18:31:11 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:32:40.201 18:31:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:32:40.201 18:31:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:32:40.201 18:31:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:32:40.201 18:31:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:32:40.201 18:31:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:40.201 18:31:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:40.201 18:31:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:40.201 18:31:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:32:40.201 18:31:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:40.459 18:31:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:32:40.459 "name": "raid_bdev1", 00:32:40.459 "uuid": "d409f0c7-bfe0-4602-961f-9c6b83271d5a", 00:32:40.459 "strip_size_kb": 0, 00:32:40.459 "state": "online", 00:32:40.459 "raid_level": "raid1", 00:32:40.459 "superblock": false, 00:32:40.459 "num_base_bdevs": 4, 00:32:40.459 "num_base_bdevs_discovered": 3, 00:32:40.459 "num_base_bdevs_operational": 3, 00:32:40.459 "base_bdevs_list": [ 00:32:40.459 { 00:32:40.459 "name": null, 00:32:40.459 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:40.459 "is_configured": false, 00:32:40.459 "data_offset": 0, 00:32:40.459 "data_size": 65536 00:32:40.459 }, 00:32:40.459 { 00:32:40.459 "name": "BaseBdev2", 00:32:40.459 "uuid": "1446ebbb-de75-585d-8bef-c1151c85ef8e", 00:32:40.459 "is_configured": true, 00:32:40.459 "data_offset": 0, 00:32:40.459 "data_size": 65536 00:32:40.459 }, 00:32:40.459 { 00:32:40.459 "name": 
"BaseBdev3", 00:32:40.459 "uuid": "a3b627c4-52a7-50e1-b231-25c7b7f8b56c", 00:32:40.459 "is_configured": true, 00:32:40.459 "data_offset": 0, 00:32:40.459 "data_size": 65536 00:32:40.459 }, 00:32:40.459 { 00:32:40.459 "name": "BaseBdev4", 00:32:40.459 "uuid": "cc79f831-60a0-5610-90c3-de5b874ac95e", 00:32:40.459 "is_configured": true, 00:32:40.459 "data_offset": 0, 00:32:40.460 "data_size": 65536 00:32:40.460 } 00:32:40.460 ] 00:32:40.460 }' 00:32:40.460 18:31:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:32:40.460 18:31:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:32:40.717 18:31:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:32:40.717 18:31:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:32:40.717 18:31:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:32:40.717 18:31:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:32:40.717 18:31:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:32:40.717 18:31:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:40.717 18:31:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:40.717 18:31:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:40.717 18:31:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:32:40.717 18:31:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:40.717 18:31:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:32:40.717 "name": "raid_bdev1", 00:32:40.717 "uuid": "d409f0c7-bfe0-4602-961f-9c6b83271d5a", 00:32:40.717 "strip_size_kb": 0, 00:32:40.717 "state": "online", 00:32:40.717 "raid_level": 
"raid1", 00:32:40.717 "superblock": false, 00:32:40.717 "num_base_bdevs": 4, 00:32:40.717 "num_base_bdevs_discovered": 3, 00:32:40.717 "num_base_bdevs_operational": 3, 00:32:40.717 "base_bdevs_list": [ 00:32:40.717 { 00:32:40.717 "name": null, 00:32:40.717 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:40.717 "is_configured": false, 00:32:40.717 "data_offset": 0, 00:32:40.717 "data_size": 65536 00:32:40.717 }, 00:32:40.717 { 00:32:40.717 "name": "BaseBdev2", 00:32:40.717 "uuid": "1446ebbb-de75-585d-8bef-c1151c85ef8e", 00:32:40.717 "is_configured": true, 00:32:40.717 "data_offset": 0, 00:32:40.717 "data_size": 65536 00:32:40.717 }, 00:32:40.717 { 00:32:40.717 "name": "BaseBdev3", 00:32:40.717 "uuid": "a3b627c4-52a7-50e1-b231-25c7b7f8b56c", 00:32:40.717 "is_configured": true, 00:32:40.717 "data_offset": 0, 00:32:40.717 "data_size": 65536 00:32:40.717 }, 00:32:40.717 { 00:32:40.717 "name": "BaseBdev4", 00:32:40.717 "uuid": "cc79f831-60a0-5610-90c3-de5b874ac95e", 00:32:40.717 "is_configured": true, 00:32:40.717 "data_offset": 0, 00:32:40.717 "data_size": 65536 00:32:40.717 } 00:32:40.717 ] 00:32:40.717 }' 00:32:40.717 18:31:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:32:40.717 18:31:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:32:40.717 18:31:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:32:40.717 18:31:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:32:40.717 18:31:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:32:40.717 18:31:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:40.717 18:31:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:32:40.717 [2024-12-06 18:31:11.633365] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is 
claimed 00:32:40.717 [2024-12-06 18:31:11.649084] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09e40 00:32:40.717 18:31:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:40.717 18:31:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:32:40.717 [2024-12-06 18:31:11.651674] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:32:42.094 18:31:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:32:42.094 18:31:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:32:42.094 18:31:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:32:42.094 18:31:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:32:42.094 18:31:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:32:42.094 18:31:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:42.094 18:31:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:42.094 18:31:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:42.094 18:31:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:32:42.094 18:31:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:42.094 18:31:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:32:42.094 "name": "raid_bdev1", 00:32:42.094 "uuid": "d409f0c7-bfe0-4602-961f-9c6b83271d5a", 00:32:42.094 "strip_size_kb": 0, 00:32:42.094 "state": "online", 00:32:42.094 "raid_level": "raid1", 00:32:42.094 "superblock": false, 00:32:42.094 "num_base_bdevs": 4, 00:32:42.094 "num_base_bdevs_discovered": 4, 00:32:42.094 "num_base_bdevs_operational": 4, 
00:32:42.094 "process": { 00:32:42.094 "type": "rebuild", 00:32:42.094 "target": "spare", 00:32:42.094 "progress": { 00:32:42.094 "blocks": 20480, 00:32:42.094 "percent": 31 00:32:42.094 } 00:32:42.094 }, 00:32:42.094 "base_bdevs_list": [ 00:32:42.094 { 00:32:42.094 "name": "spare", 00:32:42.094 "uuid": "ade674f1-9aa7-5c20-a549-f64c76f5fe49", 00:32:42.094 "is_configured": true, 00:32:42.094 "data_offset": 0, 00:32:42.094 "data_size": 65536 00:32:42.094 }, 00:32:42.094 { 00:32:42.094 "name": "BaseBdev2", 00:32:42.094 "uuid": "1446ebbb-de75-585d-8bef-c1151c85ef8e", 00:32:42.094 "is_configured": true, 00:32:42.094 "data_offset": 0, 00:32:42.094 "data_size": 65536 00:32:42.094 }, 00:32:42.094 { 00:32:42.094 "name": "BaseBdev3", 00:32:42.094 "uuid": "a3b627c4-52a7-50e1-b231-25c7b7f8b56c", 00:32:42.094 "is_configured": true, 00:32:42.094 "data_offset": 0, 00:32:42.094 "data_size": 65536 00:32:42.094 }, 00:32:42.094 { 00:32:42.094 "name": "BaseBdev4", 00:32:42.094 "uuid": "cc79f831-60a0-5610-90c3-de5b874ac95e", 00:32:42.094 "is_configured": true, 00:32:42.094 "data_offset": 0, 00:32:42.094 "data_size": 65536 00:32:42.094 } 00:32:42.094 ] 00:32:42.094 }' 00:32:42.094 18:31:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:32:42.094 18:31:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:32:42.094 18:31:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:32:42.094 18:31:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:32:42.094 18:31:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:32:42.094 18:31:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:32:42.094 18:31:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:32:42.094 18:31:12 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:32:42.094 18:31:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:32:42.094 18:31:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:42.094 18:31:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:32:42.094 [2024-12-06 18:31:12.796364] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:32:42.094 [2024-12-06 18:31:12.862085] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000d09e40 00:32:42.094 18:31:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:42.094 18:31:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:32:42.094 18:31:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:32:42.094 18:31:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:32:42.094 18:31:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:32:42.094 18:31:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:32:42.094 18:31:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:32:42.094 18:31:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:32:42.094 18:31:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:42.094 18:31:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:42.094 18:31:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:42.094 18:31:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:32:42.094 18:31:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:32:42.094 18:31:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:32:42.094 "name": "raid_bdev1", 00:32:42.094 "uuid": "d409f0c7-bfe0-4602-961f-9c6b83271d5a", 00:32:42.094 "strip_size_kb": 0, 00:32:42.094 "state": "online", 00:32:42.094 "raid_level": "raid1", 00:32:42.094 "superblock": false, 00:32:42.094 "num_base_bdevs": 4, 00:32:42.094 "num_base_bdevs_discovered": 3, 00:32:42.094 "num_base_bdevs_operational": 3, 00:32:42.094 "process": { 00:32:42.094 "type": "rebuild", 00:32:42.094 "target": "spare", 00:32:42.094 "progress": { 00:32:42.094 "blocks": 24576, 00:32:42.094 "percent": 37 00:32:42.094 } 00:32:42.094 }, 00:32:42.094 "base_bdevs_list": [ 00:32:42.094 { 00:32:42.094 "name": "spare", 00:32:42.094 "uuid": "ade674f1-9aa7-5c20-a549-f64c76f5fe49", 00:32:42.094 "is_configured": true, 00:32:42.094 "data_offset": 0, 00:32:42.094 "data_size": 65536 00:32:42.094 }, 00:32:42.094 { 00:32:42.094 "name": null, 00:32:42.094 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:42.094 "is_configured": false, 00:32:42.094 "data_offset": 0, 00:32:42.094 "data_size": 65536 00:32:42.094 }, 00:32:42.094 { 00:32:42.094 "name": "BaseBdev3", 00:32:42.094 "uuid": "a3b627c4-52a7-50e1-b231-25c7b7f8b56c", 00:32:42.094 "is_configured": true, 00:32:42.094 "data_offset": 0, 00:32:42.094 "data_size": 65536 00:32:42.094 }, 00:32:42.094 { 00:32:42.094 "name": "BaseBdev4", 00:32:42.094 "uuid": "cc79f831-60a0-5610-90c3-de5b874ac95e", 00:32:42.094 "is_configured": true, 00:32:42.094 "data_offset": 0, 00:32:42.094 "data_size": 65536 00:32:42.094 } 00:32:42.094 ] 00:32:42.094 }' 00:32:42.094 18:31:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:32:42.094 18:31:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:32:42.095 18:31:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:32:42.095 18:31:12 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:32:42.095 18:31:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=449 00:32:42.095 18:31:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:32:42.095 18:31:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:32:42.095 18:31:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:32:42.095 18:31:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:32:42.095 18:31:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:32:42.095 18:31:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:32:42.095 18:31:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:42.095 18:31:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:42.095 18:31:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:32:42.095 18:31:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:42.095 18:31:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:42.095 18:31:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:32:42.095 "name": "raid_bdev1", 00:32:42.095 "uuid": "d409f0c7-bfe0-4602-961f-9c6b83271d5a", 00:32:42.095 "strip_size_kb": 0, 00:32:42.095 "state": "online", 00:32:42.095 "raid_level": "raid1", 00:32:42.095 "superblock": false, 00:32:42.095 "num_base_bdevs": 4, 00:32:42.095 "num_base_bdevs_discovered": 3, 00:32:42.095 "num_base_bdevs_operational": 3, 00:32:42.095 "process": { 00:32:42.095 "type": "rebuild", 00:32:42.095 "target": "spare", 00:32:42.095 "progress": { 00:32:42.095 "blocks": 26624, 00:32:42.095 "percent": 40 
00:32:42.095 } 00:32:42.095 }, 00:32:42.095 "base_bdevs_list": [ 00:32:42.095 { 00:32:42.095 "name": "spare", 00:32:42.095 "uuid": "ade674f1-9aa7-5c20-a549-f64c76f5fe49", 00:32:42.095 "is_configured": true, 00:32:42.095 "data_offset": 0, 00:32:42.095 "data_size": 65536 00:32:42.095 }, 00:32:42.095 { 00:32:42.095 "name": null, 00:32:42.095 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:42.095 "is_configured": false, 00:32:42.095 "data_offset": 0, 00:32:42.095 "data_size": 65536 00:32:42.095 }, 00:32:42.095 { 00:32:42.095 "name": "BaseBdev3", 00:32:42.095 "uuid": "a3b627c4-52a7-50e1-b231-25c7b7f8b56c", 00:32:42.095 "is_configured": true, 00:32:42.095 "data_offset": 0, 00:32:42.095 "data_size": 65536 00:32:42.095 }, 00:32:42.095 { 00:32:42.095 "name": "BaseBdev4", 00:32:42.095 "uuid": "cc79f831-60a0-5610-90c3-de5b874ac95e", 00:32:42.095 "is_configured": true, 00:32:42.095 "data_offset": 0, 00:32:42.095 "data_size": 65536 00:32:42.095 } 00:32:42.095 ] 00:32:42.095 }' 00:32:42.095 18:31:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:32:42.353 18:31:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:32:42.353 18:31:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:32:42.353 18:31:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:32:42.353 18:31:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:32:43.289 18:31:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:32:43.289 18:31:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:32:43.289 18:31:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:32:43.289 18:31:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:32:43.289 18:31:14 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:32:43.289 18:31:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:32:43.289 18:31:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:43.289 18:31:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:43.289 18:31:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:43.289 18:31:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:32:43.289 18:31:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:43.289 18:31:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:32:43.289 "name": "raid_bdev1", 00:32:43.289 "uuid": "d409f0c7-bfe0-4602-961f-9c6b83271d5a", 00:32:43.289 "strip_size_kb": 0, 00:32:43.289 "state": "online", 00:32:43.289 "raid_level": "raid1", 00:32:43.289 "superblock": false, 00:32:43.289 "num_base_bdevs": 4, 00:32:43.289 "num_base_bdevs_discovered": 3, 00:32:43.289 "num_base_bdevs_operational": 3, 00:32:43.289 "process": { 00:32:43.289 "type": "rebuild", 00:32:43.289 "target": "spare", 00:32:43.289 "progress": { 00:32:43.289 "blocks": 49152, 00:32:43.289 "percent": 75 00:32:43.289 } 00:32:43.289 }, 00:32:43.289 "base_bdevs_list": [ 00:32:43.289 { 00:32:43.289 "name": "spare", 00:32:43.289 "uuid": "ade674f1-9aa7-5c20-a549-f64c76f5fe49", 00:32:43.289 "is_configured": true, 00:32:43.289 "data_offset": 0, 00:32:43.289 "data_size": 65536 00:32:43.289 }, 00:32:43.289 { 00:32:43.289 "name": null, 00:32:43.289 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:43.289 "is_configured": false, 00:32:43.289 "data_offset": 0, 00:32:43.289 "data_size": 65536 00:32:43.289 }, 00:32:43.289 { 00:32:43.289 "name": "BaseBdev3", 00:32:43.289 "uuid": "a3b627c4-52a7-50e1-b231-25c7b7f8b56c", 00:32:43.289 "is_configured": true, 
00:32:43.289 "data_offset": 0, 00:32:43.289 "data_size": 65536 00:32:43.289 }, 00:32:43.289 { 00:32:43.289 "name": "BaseBdev4", 00:32:43.289 "uuid": "cc79f831-60a0-5610-90c3-de5b874ac95e", 00:32:43.289 "is_configured": true, 00:32:43.289 "data_offset": 0, 00:32:43.289 "data_size": 65536 00:32:43.289 } 00:32:43.289 ] 00:32:43.289 }' 00:32:43.289 18:31:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:32:43.289 18:31:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:32:43.289 18:31:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:32:43.547 18:31:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:32:43.548 18:31:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:32:44.115 [2024-12-06 18:31:14.879715] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:32:44.115 [2024-12-06 18:31:14.879847] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:32:44.115 [2024-12-06 18:31:14.879915] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:32:44.373 18:31:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:32:44.373 18:31:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:32:44.373 18:31:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:32:44.373 18:31:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:32:44.373 18:31:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:32:44.373 18:31:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:32:44.373 18:31:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:32:44.373 18:31:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:44.373 18:31:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:44.373 18:31:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:32:44.373 18:31:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:44.631 18:31:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:32:44.631 "name": "raid_bdev1", 00:32:44.631 "uuid": "d409f0c7-bfe0-4602-961f-9c6b83271d5a", 00:32:44.631 "strip_size_kb": 0, 00:32:44.631 "state": "online", 00:32:44.631 "raid_level": "raid1", 00:32:44.631 "superblock": false, 00:32:44.631 "num_base_bdevs": 4, 00:32:44.631 "num_base_bdevs_discovered": 3, 00:32:44.631 "num_base_bdevs_operational": 3, 00:32:44.631 "base_bdevs_list": [ 00:32:44.631 { 00:32:44.631 "name": "spare", 00:32:44.631 "uuid": "ade674f1-9aa7-5c20-a549-f64c76f5fe49", 00:32:44.631 "is_configured": true, 00:32:44.631 "data_offset": 0, 00:32:44.631 "data_size": 65536 00:32:44.631 }, 00:32:44.631 { 00:32:44.631 "name": null, 00:32:44.631 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:44.631 "is_configured": false, 00:32:44.631 "data_offset": 0, 00:32:44.631 "data_size": 65536 00:32:44.631 }, 00:32:44.631 { 00:32:44.631 "name": "BaseBdev3", 00:32:44.631 "uuid": "a3b627c4-52a7-50e1-b231-25c7b7f8b56c", 00:32:44.631 "is_configured": true, 00:32:44.631 "data_offset": 0, 00:32:44.631 "data_size": 65536 00:32:44.631 }, 00:32:44.631 { 00:32:44.631 "name": "BaseBdev4", 00:32:44.631 "uuid": "cc79f831-60a0-5610-90c3-de5b874ac95e", 00:32:44.631 "is_configured": true, 00:32:44.631 "data_offset": 0, 00:32:44.631 "data_size": 65536 00:32:44.631 } 00:32:44.631 ] 00:32:44.631 }' 00:32:44.631 18:31:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:32:44.631 18:31:15 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:32:44.631 18:31:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:32:44.631 18:31:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:32:44.631 18:31:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:32:44.631 18:31:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:32:44.631 18:31:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:32:44.631 18:31:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:32:44.631 18:31:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:32:44.631 18:31:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:32:44.631 18:31:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:44.631 18:31:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:44.631 18:31:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:44.631 18:31:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:32:44.631 18:31:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:44.631 18:31:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:32:44.631 "name": "raid_bdev1", 00:32:44.631 "uuid": "d409f0c7-bfe0-4602-961f-9c6b83271d5a", 00:32:44.631 "strip_size_kb": 0, 00:32:44.631 "state": "online", 00:32:44.631 "raid_level": "raid1", 00:32:44.631 "superblock": false, 00:32:44.631 "num_base_bdevs": 4, 00:32:44.631 "num_base_bdevs_discovered": 3, 00:32:44.631 "num_base_bdevs_operational": 3, 00:32:44.631 "base_bdevs_list": [ 00:32:44.631 { 00:32:44.631 "name": "spare", 
00:32:44.631 "uuid": "ade674f1-9aa7-5c20-a549-f64c76f5fe49", 00:32:44.631 "is_configured": true, 00:32:44.631 "data_offset": 0, 00:32:44.631 "data_size": 65536 00:32:44.631 }, 00:32:44.631 { 00:32:44.631 "name": null, 00:32:44.631 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:44.631 "is_configured": false, 00:32:44.631 "data_offset": 0, 00:32:44.631 "data_size": 65536 00:32:44.631 }, 00:32:44.631 { 00:32:44.631 "name": "BaseBdev3", 00:32:44.631 "uuid": "a3b627c4-52a7-50e1-b231-25c7b7f8b56c", 00:32:44.631 "is_configured": true, 00:32:44.631 "data_offset": 0, 00:32:44.631 "data_size": 65536 00:32:44.631 }, 00:32:44.631 { 00:32:44.631 "name": "BaseBdev4", 00:32:44.631 "uuid": "cc79f831-60a0-5610-90c3-de5b874ac95e", 00:32:44.631 "is_configured": true, 00:32:44.631 "data_offset": 0, 00:32:44.631 "data_size": 65536 00:32:44.631 } 00:32:44.631 ] 00:32:44.631 }' 00:32:44.631 18:31:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:32:44.631 18:31:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:32:44.631 18:31:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:32:44.631 18:31:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:32:44.631 18:31:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:32:44.631 18:31:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:32:44.631 18:31:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:32:44.632 18:31:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:32:44.632 18:31:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:32:44.632 18:31:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:32:44.632 18:31:15 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:32:44.632 18:31:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:32:44.632 18:31:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:32:44.632 18:31:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:32:44.632 18:31:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:44.632 18:31:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:44.632 18:31:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:44.632 18:31:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:32:44.888 18:31:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:44.888 18:31:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:32:44.888 "name": "raid_bdev1", 00:32:44.888 "uuid": "d409f0c7-bfe0-4602-961f-9c6b83271d5a", 00:32:44.888 "strip_size_kb": 0, 00:32:44.888 "state": "online", 00:32:44.888 "raid_level": "raid1", 00:32:44.888 "superblock": false, 00:32:44.888 "num_base_bdevs": 4, 00:32:44.889 "num_base_bdevs_discovered": 3, 00:32:44.889 "num_base_bdevs_operational": 3, 00:32:44.889 "base_bdevs_list": [ 00:32:44.889 { 00:32:44.889 "name": "spare", 00:32:44.889 "uuid": "ade674f1-9aa7-5c20-a549-f64c76f5fe49", 00:32:44.889 "is_configured": true, 00:32:44.889 "data_offset": 0, 00:32:44.889 "data_size": 65536 00:32:44.889 }, 00:32:44.889 { 00:32:44.889 "name": null, 00:32:44.889 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:44.889 "is_configured": false, 00:32:44.889 "data_offset": 0, 00:32:44.889 "data_size": 65536 00:32:44.889 }, 00:32:44.889 { 00:32:44.889 "name": "BaseBdev3", 00:32:44.889 "uuid": "a3b627c4-52a7-50e1-b231-25c7b7f8b56c", 00:32:44.889 "is_configured": true, 
00:32:44.889 "data_offset": 0, 00:32:44.889 "data_size": 65536 00:32:44.889 }, 00:32:44.889 { 00:32:44.889 "name": "BaseBdev4", 00:32:44.889 "uuid": "cc79f831-60a0-5610-90c3-de5b874ac95e", 00:32:44.889 "is_configured": true, 00:32:44.889 "data_offset": 0, 00:32:44.889 "data_size": 65536 00:32:44.889 } 00:32:44.889 ] 00:32:44.889 }' 00:32:44.889 18:31:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:32:44.889 18:31:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:32:45.146 18:31:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:32:45.146 18:31:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:45.146 18:31:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:32:45.146 [2024-12-06 18:31:15.983647] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:32:45.146 [2024-12-06 18:31:15.983922] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:32:45.146 [2024-12-06 18:31:15.984071] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:32:45.146 [2024-12-06 18:31:15.984199] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:32:45.146 [2024-12-06 18:31:15.984214] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:32:45.146 18:31:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:45.146 18:31:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:45.146 18:31:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:32:45.146 18:31:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:45.146 18:31:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 
-- # set +x 00:32:45.146 18:31:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:45.146 18:31:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:32:45.146 18:31:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:32:45.146 18:31:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:32:45.146 18:31:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:32:45.146 18:31:16 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:32:45.146 18:31:16 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:32:45.146 18:31:16 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:32:45.146 18:31:16 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:32:45.146 18:31:16 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:32:45.146 18:31:16 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:32:45.146 18:31:16 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:32:45.146 18:31:16 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:32:45.146 18:31:16 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:32:45.403 /dev/nbd0 00:32:45.403 18:31:16 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:32:45.403 18:31:16 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:32:45.403 18:31:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:32:45.403 18:31:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:32:45.403 18:31:16 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:32:45.403 18:31:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:32:45.403 18:31:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:32:45.403 18:31:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:32:45.403 18:31:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:32:45.403 18:31:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:32:45.404 18:31:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:32:45.404 1+0 records in 00:32:45.404 1+0 records out 00:32:45.404 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000404026 s, 10.1 MB/s 00:32:45.404 18:31:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:32:45.404 18:31:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:32:45.404 18:31:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:32:45.404 18:31:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:32:45.404 18:31:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:32:45.404 18:31:16 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:32:45.404 18:31:16 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:32:45.404 18:31:16 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:32:45.662 /dev/nbd1 00:32:45.662 18:31:16 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:32:45.662 
18:31:16 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:32:45.662 18:31:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:32:45.662 18:31:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:32:45.662 18:31:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:32:45.662 18:31:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:32:45.662 18:31:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:32:45.662 18:31:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:32:45.662 18:31:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:32:45.662 18:31:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:32:45.662 18:31:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:32:45.662 1+0 records in 00:32:45.662 1+0 records out 00:32:45.662 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000530276 s, 7.7 MB/s 00:32:45.662 18:31:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:32:45.662 18:31:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:32:45.662 18:31:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:32:45.662 18:31:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:32:45.662 18:31:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:32:45.662 18:31:16 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:32:45.662 18:31:16 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 
00:32:45.662 18:31:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:32:45.920 18:31:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:32:45.920 18:31:16 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:32:45.920 18:31:16 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:32:45.920 18:31:16 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:32:45.920 18:31:16 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:32:45.920 18:31:16 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:32:45.920 18:31:16 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:32:46.179 18:31:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:32:46.179 18:31:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:32:46.179 18:31:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:32:46.179 18:31:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:32:46.179 18:31:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:32:46.179 18:31:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:32:46.179 18:31:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:32:46.179 18:31:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:32:46.179 18:31:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:32:46.179 18:31:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:32:46.438 
18:31:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:32:46.438 18:31:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:32:46.438 18:31:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:32:46.438 18:31:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:32:46.438 18:31:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:32:46.438 18:31:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:32:46.438 18:31:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:32:46.438 18:31:17 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:32:46.438 18:31:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:32:46.438 18:31:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 77276 00:32:46.438 18:31:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@954 -- # '[' -z 77276 ']' 00:32:46.438 18:31:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@958 -- # kill -0 77276 00:32:46.438 18:31:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@959 -- # uname 00:32:46.438 18:31:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:46.438 18:31:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77276 00:32:46.438 18:31:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:32:46.438 18:31:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:32:46.438 18:31:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77276' 00:32:46.438 killing process with pid 77276 00:32:46.438 18:31:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@973 -- # kill 77276 00:32:46.438 
Received shutdown signal, test time was about 60.000000 seconds 00:32:46.438 00:32:46.438 Latency(us) 00:32:46.438 [2024-12-06T18:31:17.387Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:46.438 [2024-12-06T18:31:17.387Z] =================================================================================================================== 00:32:46.438 [2024-12-06T18:31:17.387Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:32:46.438 18:31:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@978 -- # wait 77276 00:32:46.438 [2024-12-06 18:31:17.351565] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:32:47.015 [2024-12-06 18:31:17.895813] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:32:48.403 18:31:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:32:48.403 00:32:48.403 real 0m18.591s 00:32:48.403 user 0m19.920s 00:32:48.403 sys 0m3.990s 00:32:48.403 18:31:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:48.403 ************************************ 00:32:48.403 END TEST raid_rebuild_test 00:32:48.403 ************************************ 00:32:48.403 18:31:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:32:48.403 18:31:19 bdev_raid -- bdev/bdev_raid.sh@979 -- # run_test raid_rebuild_test_sb raid_rebuild_test raid1 4 true false true 00:32:48.403 18:31:19 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:32:48.403 18:31:19 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:48.403 18:31:19 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:32:48.403 ************************************ 00:32:48.403 START TEST raid_rebuild_test_sb 00:32:48.403 ************************************ 00:32:48.403 18:31:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 4 true false true 00:32:48.403 18:31:19 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:32:48.403 18:31:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:32:48.403 18:31:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:32:48.403 18:31:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:32:48.403 18:31:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:32:48.403 18:31:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:32:48.403 18:31:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:32:48.403 18:31:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:32:48.403 18:31:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:32:48.404 18:31:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:32:48.404 18:31:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:32:48.404 18:31:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:32:48.404 18:31:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:32:48.404 18:31:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:32:48.404 18:31:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:32:48.404 18:31:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:32:48.404 18:31:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:32:48.404 18:31:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:32:48.404 18:31:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:32:48.404 18:31:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # 
base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:32:48.404 18:31:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:32:48.404 18:31:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:32:48.404 18:31:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:32:48.404 18:31:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:32:48.404 18:31:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:32:48.404 18:31:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:32:48.404 18:31:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:32:48.404 18:31:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:32:48.404 18:31:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:32:48.404 18:31:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:32:48.404 18:31:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=77735 00:32:48.404 18:31:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:32:48.404 18:31:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 77735 00:32:48.404 18:31:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@835 -- # '[' -z 77735 ']' 00:32:48.404 18:31:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:48.404 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:32:48.404 18:31:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:48.404 18:31:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:48.404 18:31:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:48.404 18:31:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:48.404 I/O size of 3145728 is greater than zero copy threshold (65536). 00:32:48.404 Zero copy mechanism will not be used. 00:32:48.404 [2024-12-06 18:31:19.350510] Starting SPDK v25.01-pre git sha1 b6a18b192 / DPDK 24.03.0 initialization... 00:32:48.404 [2024-12-06 18:31:19.350677] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77735 ] 00:32:48.662 [2024-12-06 18:31:19.526700] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:48.921 [2024-12-06 18:31:19.670864] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:49.179 [2024-12-06 18:31:19.916633] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:32:49.179 [2024-12-06 18:31:19.916988] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:32:49.438 18:31:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:49.438 18:31:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # return 0 00:32:49.438 18:31:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:32:49.438 18:31:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:32:49.438 18:31:20 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:49.438 18:31:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:49.438 BaseBdev1_malloc 00:32:49.438 18:31:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:49.438 18:31:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:32:49.438 18:31:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:49.438 18:31:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:49.438 [2024-12-06 18:31:20.272677] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:32:49.438 [2024-12-06 18:31:20.272782] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:32:49.438 [2024-12-06 18:31:20.272814] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:32:49.438 [2024-12-06 18:31:20.272830] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:32:49.438 [2024-12-06 18:31:20.275798] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:32:49.438 [2024-12-06 18:31:20.276068] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:32:49.438 BaseBdev1 00:32:49.439 18:31:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:49.439 18:31:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:32:49.439 18:31:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:32:49.439 18:31:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:49.439 18:31:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:49.439 
BaseBdev2_malloc 00:32:49.439 18:31:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:49.439 18:31:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:32:49.439 18:31:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:49.439 18:31:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:49.439 [2024-12-06 18:31:20.337860] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:32:49.439 [2024-12-06 18:31:20.337981] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:32:49.439 [2024-12-06 18:31:20.338016] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:32:49.439 [2024-12-06 18:31:20.338032] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:32:49.439 [2024-12-06 18:31:20.341046] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:32:49.439 [2024-12-06 18:31:20.341108] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:32:49.439 BaseBdev2 00:32:49.439 18:31:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:49.439 18:31:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:32:49.439 18:31:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:32:49.439 18:31:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:49.439 18:31:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:49.697 BaseBdev3_malloc 00:32:49.697 18:31:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:49.697 18:31:20 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:32:49.697 18:31:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:49.697 18:31:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:49.697 [2024-12-06 18:31:20.413047] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:32:49.697 [2024-12-06 18:31:20.413366] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:32:49.697 [2024-12-06 18:31:20.413409] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:32:49.697 [2024-12-06 18:31:20.413427] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:32:49.697 [2024-12-06 18:31:20.416369] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:32:49.697 [2024-12-06 18:31:20.416422] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:32:49.697 BaseBdev3 00:32:49.697 18:31:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:49.697 18:31:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:32:49.697 18:31:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:32:49.697 18:31:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:49.697 18:31:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:49.697 BaseBdev4_malloc 00:32:49.697 18:31:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:49.697 18:31:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:32:49.697 18:31:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:32:49.697 18:31:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:49.697 [2024-12-06 18:31:20.475084] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:32:49.697 [2024-12-06 18:31:20.475203] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:32:49.697 [2024-12-06 18:31:20.475232] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:32:49.697 [2024-12-06 18:31:20.475249] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:32:49.697 [2024-12-06 18:31:20.478163] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:32:49.697 [2024-12-06 18:31:20.478221] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:32:49.698 BaseBdev4 00:32:49.698 18:31:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:49.698 18:31:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:32:49.698 18:31:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:49.698 18:31:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:49.698 spare_malloc 00:32:49.698 18:31:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:49.698 18:31:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:32:49.698 18:31:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:49.698 18:31:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:49.698 spare_delay 00:32:49.698 18:31:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:49.698 18:31:20 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:32:49.698 18:31:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:49.698 18:31:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:49.698 [2024-12-06 18:31:20.550249] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:32:49.698 [2024-12-06 18:31:20.550581] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:32:49.698 [2024-12-06 18:31:20.550624] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:32:49.698 [2024-12-06 18:31:20.550641] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:32:49.698 [2024-12-06 18:31:20.553641] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:32:49.698 [2024-12-06 18:31:20.553854] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:32:49.698 spare 00:32:49.698 18:31:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:49.698 18:31:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:32:49.698 18:31:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:49.698 18:31:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:49.698 [2024-12-06 18:31:20.562325] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:32:49.698 [2024-12-06 18:31:20.564873] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:32:49.698 [2024-12-06 18:31:20.565097] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:32:49.698 [2024-12-06 18:31:20.565192] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: 
bdev BaseBdev4 is claimed 00:32:49.698 [2024-12-06 18:31:20.565425] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:32:49.698 [2024-12-06 18:31:20.565444] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:32:49.698 [2024-12-06 18:31:20.565801] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:32:49.698 [2024-12-06 18:31:20.566017] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:32:49.698 [2024-12-06 18:31:20.566028] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:32:49.698 [2024-12-06 18:31:20.566323] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:32:49.698 18:31:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:49.698 18:31:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:32:49.698 18:31:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:32:49.698 18:31:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:32:49.698 18:31:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:32:49.698 18:31:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:32:49.698 18:31:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:32:49.698 18:31:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:32:49.698 18:31:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:32:49.698 18:31:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:32:49.698 18:31:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # 
local tmp 00:32:49.698 18:31:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:49.698 18:31:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:49.698 18:31:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:49.698 18:31:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:49.698 18:31:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:49.698 18:31:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:32:49.698 "name": "raid_bdev1", 00:32:49.698 "uuid": "bb1cbb05-aa6f-4ce1-8ce6-2c03f884fe75", 00:32:49.698 "strip_size_kb": 0, 00:32:49.698 "state": "online", 00:32:49.698 "raid_level": "raid1", 00:32:49.698 "superblock": true, 00:32:49.698 "num_base_bdevs": 4, 00:32:49.698 "num_base_bdevs_discovered": 4, 00:32:49.698 "num_base_bdevs_operational": 4, 00:32:49.698 "base_bdevs_list": [ 00:32:49.698 { 00:32:49.698 "name": "BaseBdev1", 00:32:49.698 "uuid": "c6f4e649-945e-5109-bc42-8e55b935dc53", 00:32:49.698 "is_configured": true, 00:32:49.698 "data_offset": 2048, 00:32:49.698 "data_size": 63488 00:32:49.698 }, 00:32:49.698 { 00:32:49.698 "name": "BaseBdev2", 00:32:49.698 "uuid": "f4f8b744-b662-5bfb-8c47-432ef71df095", 00:32:49.698 "is_configured": true, 00:32:49.698 "data_offset": 2048, 00:32:49.698 "data_size": 63488 00:32:49.698 }, 00:32:49.698 { 00:32:49.698 "name": "BaseBdev3", 00:32:49.698 "uuid": "4198488f-a494-5de1-9e90-f9ab041f7ea0", 00:32:49.698 "is_configured": true, 00:32:49.698 "data_offset": 2048, 00:32:49.698 "data_size": 63488 00:32:49.698 }, 00:32:49.698 { 00:32:49.698 "name": "BaseBdev4", 00:32:49.698 "uuid": "2e6c8ce5-0fd0-5812-9ec0-8a28f5096409", 00:32:49.698 "is_configured": true, 00:32:49.698 "data_offset": 2048, 00:32:49.698 "data_size": 63488 00:32:49.698 } 00:32:49.698 ] 00:32:49.698 }' 
00:32:49.698 18:31:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:32:49.698 18:31:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:50.265 18:31:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:32:50.265 18:31:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:32:50.265 18:31:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:50.265 18:31:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:50.266 [2024-12-06 18:31:21.014009] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:32:50.266 18:31:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:50.266 18:31:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:32:50.266 18:31:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:50.266 18:31:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:50.266 18:31:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:50.266 18:31:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:32:50.266 18:31:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:50.266 18:31:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:32:50.266 18:31:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:32:50.266 18:31:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:32:50.266 18:31:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:32:50.266 18:31:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # 
nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:32:50.266 18:31:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:32:50.266 18:31:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:32:50.266 18:31:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:32:50.266 18:31:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:32:50.266 18:31:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:32:50.266 18:31:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:32:50.266 18:31:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:32:50.266 18:31:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:32:50.266 18:31:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:32:50.524 [2024-12-06 18:31:21.317470] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:32:50.524 /dev/nbd0 00:32:50.524 18:31:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:32:50.524 18:31:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:32:50.524 18:31:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:32:50.524 18:31:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:32:50.524 18:31:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:32:50.524 18:31:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:32:50.524 18:31:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:32:50.524 18:31:21 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@877 -- # break 00:32:50.524 18:31:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:32:50.524 18:31:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:32:50.524 18:31:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:32:50.524 1+0 records in 00:32:50.524 1+0 records out 00:32:50.524 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000563228 s, 7.3 MB/s 00:32:50.524 18:31:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:32:50.524 18:31:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:32:50.524 18:31:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:32:50.524 18:31:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:32:50.524 18:31:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:32:50.524 18:31:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:32:50.524 18:31:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:32:50.524 18:31:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:32:50.524 18:31:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:32:50.524 18:31:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=63488 oflag=direct 00:32:57.087 63488+0 records in 00:32:57.087 63488+0 records out 00:32:57.087 32505856 bytes (33 MB, 31 MiB) copied, 6.22519 s, 5.2 MB/s 00:32:57.087 18:31:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:32:57.087 18:31:27 
bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:32:57.087 18:31:27 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:32:57.087 18:31:27 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:32:57.087 18:31:27 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:32:57.087 18:31:27 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:32:57.087 18:31:27 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:32:57.087 [2024-12-06 18:31:27.831700] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:32:57.087 18:31:27 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:32:57.087 18:31:27 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:32:57.087 18:31:27 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:32:57.087 18:31:27 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:32:57.087 18:31:27 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:32:57.087 18:31:27 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:32:57.087 18:31:27 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:32:57.087 18:31:27 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:32:57.087 18:31:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:32:57.087 18:31:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:57.087 18:31:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:57.087 [2024-12-06 18:31:27.867879] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:32:57.087 18:31:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:57.087 18:31:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:32:57.087 18:31:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:32:57.087 18:31:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:32:57.087 18:31:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:32:57.087 18:31:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:32:57.087 18:31:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:32:57.087 18:31:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:32:57.087 18:31:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:32:57.087 18:31:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:32:57.087 18:31:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:32:57.087 18:31:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:57.087 18:31:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:57.087 18:31:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:57.087 18:31:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:57.087 18:31:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:57.087 18:31:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:32:57.087 "name": "raid_bdev1", 00:32:57.087 "uuid": 
"bb1cbb05-aa6f-4ce1-8ce6-2c03f884fe75", 00:32:57.087 "strip_size_kb": 0, 00:32:57.087 "state": "online", 00:32:57.087 "raid_level": "raid1", 00:32:57.087 "superblock": true, 00:32:57.087 "num_base_bdevs": 4, 00:32:57.087 "num_base_bdevs_discovered": 3, 00:32:57.087 "num_base_bdevs_operational": 3, 00:32:57.087 "base_bdevs_list": [ 00:32:57.087 { 00:32:57.087 "name": null, 00:32:57.087 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:57.087 "is_configured": false, 00:32:57.087 "data_offset": 0, 00:32:57.087 "data_size": 63488 00:32:57.087 }, 00:32:57.087 { 00:32:57.087 "name": "BaseBdev2", 00:32:57.087 "uuid": "f4f8b744-b662-5bfb-8c47-432ef71df095", 00:32:57.087 "is_configured": true, 00:32:57.087 "data_offset": 2048, 00:32:57.087 "data_size": 63488 00:32:57.087 }, 00:32:57.087 { 00:32:57.087 "name": "BaseBdev3", 00:32:57.087 "uuid": "4198488f-a494-5de1-9e90-f9ab041f7ea0", 00:32:57.087 "is_configured": true, 00:32:57.087 "data_offset": 2048, 00:32:57.087 "data_size": 63488 00:32:57.087 }, 00:32:57.087 { 00:32:57.087 "name": "BaseBdev4", 00:32:57.087 "uuid": "2e6c8ce5-0fd0-5812-9ec0-8a28f5096409", 00:32:57.087 "is_configured": true, 00:32:57.087 "data_offset": 2048, 00:32:57.087 "data_size": 63488 00:32:57.087 } 00:32:57.087 ] 00:32:57.087 }' 00:32:57.087 18:31:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:32:57.087 18:31:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:57.656 18:31:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:32:57.656 18:31:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:57.656 18:31:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:57.656 [2024-12-06 18:31:28.315374] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:32:57.657 [2024-12-06 18:31:28.332221] bdev_raid.c: 265:raid_bdev_create_cb: 
*DEBUG*: raid_bdev_create_cb, 0x60d000ca3500 00:32:57.657 18:31:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:57.657 18:31:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:32:57.657 [2024-12-06 18:31:28.334875] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:32:58.593 18:31:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:32:58.593 18:31:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:32:58.593 18:31:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:32:58.593 18:31:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:32:58.593 18:31:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:32:58.593 18:31:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:58.593 18:31:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:58.593 18:31:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:58.593 18:31:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:58.593 18:31:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:58.593 18:31:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:32:58.593 "name": "raid_bdev1", 00:32:58.594 "uuid": "bb1cbb05-aa6f-4ce1-8ce6-2c03f884fe75", 00:32:58.594 "strip_size_kb": 0, 00:32:58.594 "state": "online", 00:32:58.594 "raid_level": "raid1", 00:32:58.594 "superblock": true, 00:32:58.594 "num_base_bdevs": 4, 00:32:58.594 "num_base_bdevs_discovered": 4, 00:32:58.594 "num_base_bdevs_operational": 4, 00:32:58.594 "process": { 00:32:58.594 "type": 
"rebuild", 00:32:58.594 "target": "spare", 00:32:58.594 "progress": { 00:32:58.594 "blocks": 20480, 00:32:58.594 "percent": 32 00:32:58.594 } 00:32:58.594 }, 00:32:58.594 "base_bdevs_list": [ 00:32:58.594 { 00:32:58.594 "name": "spare", 00:32:58.594 "uuid": "5facc55b-1498-5360-bec9-2c2a8443244b", 00:32:58.594 "is_configured": true, 00:32:58.594 "data_offset": 2048, 00:32:58.594 "data_size": 63488 00:32:58.594 }, 00:32:58.594 { 00:32:58.594 "name": "BaseBdev2", 00:32:58.594 "uuid": "f4f8b744-b662-5bfb-8c47-432ef71df095", 00:32:58.594 "is_configured": true, 00:32:58.594 "data_offset": 2048, 00:32:58.594 "data_size": 63488 00:32:58.594 }, 00:32:58.594 { 00:32:58.594 "name": "BaseBdev3", 00:32:58.594 "uuid": "4198488f-a494-5de1-9e90-f9ab041f7ea0", 00:32:58.594 "is_configured": true, 00:32:58.594 "data_offset": 2048, 00:32:58.594 "data_size": 63488 00:32:58.594 }, 00:32:58.594 { 00:32:58.594 "name": "BaseBdev4", 00:32:58.594 "uuid": "2e6c8ce5-0fd0-5812-9ec0-8a28f5096409", 00:32:58.594 "is_configured": true, 00:32:58.594 "data_offset": 2048, 00:32:58.594 "data_size": 63488 00:32:58.594 } 00:32:58.594 ] 00:32:58.594 }' 00:32:58.594 18:31:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:32:58.594 18:31:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:32:58.594 18:31:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:32:58.594 18:31:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:32:58.594 18:31:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:32:58.594 18:31:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:58.594 18:31:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:58.594 [2024-12-06 18:31:29.490802] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:32:58.853 [2024-12-06 18:31:29.546733] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:32:58.853 [2024-12-06 18:31:29.547102] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:32:58.853 [2024-12-06 18:31:29.547230] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:32:58.853 [2024-12-06 18:31:29.547282] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:32:58.853 18:31:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:58.853 18:31:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:32:58.853 18:31:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:32:58.853 18:31:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:32:58.853 18:31:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:32:58.853 18:31:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:32:58.853 18:31:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:32:58.853 18:31:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:32:58.853 18:31:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:32:58.853 18:31:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:32:58.853 18:31:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:32:58.853 18:31:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:58.853 18:31:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r 
'.[] | select(.name == "raid_bdev1")' 00:32:58.853 18:31:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:58.853 18:31:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:58.853 18:31:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:58.853 18:31:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:32:58.853 "name": "raid_bdev1", 00:32:58.853 "uuid": "bb1cbb05-aa6f-4ce1-8ce6-2c03f884fe75", 00:32:58.853 "strip_size_kb": 0, 00:32:58.853 "state": "online", 00:32:58.853 "raid_level": "raid1", 00:32:58.853 "superblock": true, 00:32:58.853 "num_base_bdevs": 4, 00:32:58.853 "num_base_bdevs_discovered": 3, 00:32:58.853 "num_base_bdevs_operational": 3, 00:32:58.853 "base_bdevs_list": [ 00:32:58.853 { 00:32:58.853 "name": null, 00:32:58.853 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:58.853 "is_configured": false, 00:32:58.853 "data_offset": 0, 00:32:58.853 "data_size": 63488 00:32:58.853 }, 00:32:58.853 { 00:32:58.853 "name": "BaseBdev2", 00:32:58.853 "uuid": "f4f8b744-b662-5bfb-8c47-432ef71df095", 00:32:58.853 "is_configured": true, 00:32:58.853 "data_offset": 2048, 00:32:58.853 "data_size": 63488 00:32:58.853 }, 00:32:58.853 { 00:32:58.853 "name": "BaseBdev3", 00:32:58.853 "uuid": "4198488f-a494-5de1-9e90-f9ab041f7ea0", 00:32:58.853 "is_configured": true, 00:32:58.853 "data_offset": 2048, 00:32:58.853 "data_size": 63488 00:32:58.853 }, 00:32:58.853 { 00:32:58.853 "name": "BaseBdev4", 00:32:58.853 "uuid": "2e6c8ce5-0fd0-5812-9ec0-8a28f5096409", 00:32:58.853 "is_configured": true, 00:32:58.853 "data_offset": 2048, 00:32:58.853 "data_size": 63488 00:32:58.853 } 00:32:58.853 ] 00:32:58.853 }' 00:32:58.853 18:31:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:32:58.853 18:31:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:59.112 18:31:30 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:32:59.112 18:31:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:32:59.112 18:31:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:32:59.112 18:31:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:32:59.112 18:31:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:32:59.112 18:31:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:32:59.112 18:31:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:59.112 18:31:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:59.112 18:31:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:32:59.112 18:31:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:59.112 18:31:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:32:59.112 "name": "raid_bdev1", 00:32:59.112 "uuid": "bb1cbb05-aa6f-4ce1-8ce6-2c03f884fe75", 00:32:59.112 "strip_size_kb": 0, 00:32:59.112 "state": "online", 00:32:59.112 "raid_level": "raid1", 00:32:59.112 "superblock": true, 00:32:59.112 "num_base_bdevs": 4, 00:32:59.112 "num_base_bdevs_discovered": 3, 00:32:59.112 "num_base_bdevs_operational": 3, 00:32:59.112 "base_bdevs_list": [ 00:32:59.112 { 00:32:59.112 "name": null, 00:32:59.112 "uuid": "00000000-0000-0000-0000-000000000000", 00:32:59.112 "is_configured": false, 00:32:59.112 "data_offset": 0, 00:32:59.112 "data_size": 63488 00:32:59.112 }, 00:32:59.112 { 00:32:59.112 "name": "BaseBdev2", 00:32:59.112 "uuid": "f4f8b744-b662-5bfb-8c47-432ef71df095", 00:32:59.112 "is_configured": true, 00:32:59.112 "data_offset": 2048, 00:32:59.112 "data_size": 
63488 00:32:59.112 }, 00:32:59.112 { 00:32:59.112 "name": "BaseBdev3", 00:32:59.112 "uuid": "4198488f-a494-5de1-9e90-f9ab041f7ea0", 00:32:59.112 "is_configured": true, 00:32:59.112 "data_offset": 2048, 00:32:59.112 "data_size": 63488 00:32:59.112 }, 00:32:59.112 { 00:32:59.112 "name": "BaseBdev4", 00:32:59.112 "uuid": "2e6c8ce5-0fd0-5812-9ec0-8a28f5096409", 00:32:59.112 "is_configured": true, 00:32:59.112 "data_offset": 2048, 00:32:59.112 "data_size": 63488 00:32:59.112 } 00:32:59.112 ] 00:32:59.112 }' 00:32:59.370 18:31:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:32:59.370 18:31:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:32:59.370 18:31:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:32:59.370 18:31:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:32:59.370 18:31:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:32:59.370 18:31:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:59.370 18:31:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:32:59.370 [2024-12-06 18:31:30.152434] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:32:59.371 [2024-12-06 18:31:30.166888] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca35d0 00:32:59.371 18:31:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:59.371 18:31:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:32:59.371 [2024-12-06 18:31:30.169768] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:33:00.307 18:31:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild 
spare 00:33:00.307 18:31:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:33:00.307 18:31:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:33:00.307 18:31:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:33:00.307 18:31:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:33:00.307 18:31:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:00.307 18:31:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:00.307 18:31:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:00.307 18:31:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:00.307 18:31:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:00.307 18:31:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:33:00.307 "name": "raid_bdev1", 00:33:00.307 "uuid": "bb1cbb05-aa6f-4ce1-8ce6-2c03f884fe75", 00:33:00.307 "strip_size_kb": 0, 00:33:00.307 "state": "online", 00:33:00.307 "raid_level": "raid1", 00:33:00.307 "superblock": true, 00:33:00.307 "num_base_bdevs": 4, 00:33:00.307 "num_base_bdevs_discovered": 4, 00:33:00.307 "num_base_bdevs_operational": 4, 00:33:00.307 "process": { 00:33:00.307 "type": "rebuild", 00:33:00.307 "target": "spare", 00:33:00.307 "progress": { 00:33:00.307 "blocks": 20480, 00:33:00.307 "percent": 32 00:33:00.307 } 00:33:00.307 }, 00:33:00.307 "base_bdevs_list": [ 00:33:00.307 { 00:33:00.307 "name": "spare", 00:33:00.307 "uuid": "5facc55b-1498-5360-bec9-2c2a8443244b", 00:33:00.307 "is_configured": true, 00:33:00.307 "data_offset": 2048, 00:33:00.307 "data_size": 63488 00:33:00.307 }, 00:33:00.307 { 00:33:00.307 "name": "BaseBdev2", 00:33:00.307 "uuid": 
"f4f8b744-b662-5bfb-8c47-432ef71df095", 00:33:00.307 "is_configured": true, 00:33:00.307 "data_offset": 2048, 00:33:00.307 "data_size": 63488 00:33:00.307 }, 00:33:00.307 { 00:33:00.307 "name": "BaseBdev3", 00:33:00.307 "uuid": "4198488f-a494-5de1-9e90-f9ab041f7ea0", 00:33:00.307 "is_configured": true, 00:33:00.307 "data_offset": 2048, 00:33:00.307 "data_size": 63488 00:33:00.307 }, 00:33:00.307 { 00:33:00.307 "name": "BaseBdev4", 00:33:00.307 "uuid": "2e6c8ce5-0fd0-5812-9ec0-8a28f5096409", 00:33:00.307 "is_configured": true, 00:33:00.307 "data_offset": 2048, 00:33:00.307 "data_size": 63488 00:33:00.307 } 00:33:00.307 ] 00:33:00.307 }' 00:33:00.307 18:31:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:33:00.567 18:31:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:33:00.567 18:31:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:33:00.567 18:31:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:33:00.567 18:31:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:33:00.567 18:31:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:33:00.567 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:33:00.567 18:31:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:33:00.567 18:31:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:33:00.567 18:31:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:33:00.567 18:31:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:33:00.567 18:31:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:00.567 18:31:31 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:00.567 [2024-12-06 18:31:31.305841] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:33:00.567 [2024-12-06 18:31:31.480664] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000ca35d0 00:33:00.567 18:31:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:00.567 18:31:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:33:00.567 18:31:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:33:00.567 18:31:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:33:00.567 18:31:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:33:00.567 18:31:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:33:00.567 18:31:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:33:00.567 18:31:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:33:00.567 18:31:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:00.567 18:31:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:00.567 18:31:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:00.567 18:31:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:00.826 18:31:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:00.826 18:31:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:33:00.826 "name": "raid_bdev1", 00:33:00.826 "uuid": "bb1cbb05-aa6f-4ce1-8ce6-2c03f884fe75", 00:33:00.826 "strip_size_kb": 0, 00:33:00.826 
"state": "online", 00:33:00.826 "raid_level": "raid1", 00:33:00.826 "superblock": true, 00:33:00.826 "num_base_bdevs": 4, 00:33:00.826 "num_base_bdevs_discovered": 3, 00:33:00.826 "num_base_bdevs_operational": 3, 00:33:00.826 "process": { 00:33:00.826 "type": "rebuild", 00:33:00.826 "target": "spare", 00:33:00.826 "progress": { 00:33:00.826 "blocks": 24576, 00:33:00.826 "percent": 38 00:33:00.826 } 00:33:00.826 }, 00:33:00.826 "base_bdevs_list": [ 00:33:00.826 { 00:33:00.826 "name": "spare", 00:33:00.826 "uuid": "5facc55b-1498-5360-bec9-2c2a8443244b", 00:33:00.826 "is_configured": true, 00:33:00.826 "data_offset": 2048, 00:33:00.826 "data_size": 63488 00:33:00.826 }, 00:33:00.826 { 00:33:00.826 "name": null, 00:33:00.826 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:00.826 "is_configured": false, 00:33:00.826 "data_offset": 0, 00:33:00.826 "data_size": 63488 00:33:00.826 }, 00:33:00.826 { 00:33:00.826 "name": "BaseBdev3", 00:33:00.826 "uuid": "4198488f-a494-5de1-9e90-f9ab041f7ea0", 00:33:00.826 "is_configured": true, 00:33:00.826 "data_offset": 2048, 00:33:00.826 "data_size": 63488 00:33:00.826 }, 00:33:00.826 { 00:33:00.826 "name": "BaseBdev4", 00:33:00.826 "uuid": "2e6c8ce5-0fd0-5812-9ec0-8a28f5096409", 00:33:00.826 "is_configured": true, 00:33:00.826 "data_offset": 2048, 00:33:00.826 "data_size": 63488 00:33:00.826 } 00:33:00.826 ] 00:33:00.826 }' 00:33:00.826 18:31:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:33:00.826 18:31:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:33:00.826 18:31:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:33:00.826 18:31:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:33:00.826 18:31:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=468 00:33:00.826 18:31:31 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:33:00.826 18:31:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:33:00.826 18:31:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:33:00.826 18:31:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:33:00.826 18:31:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:33:00.826 18:31:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:33:00.826 18:31:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:00.826 18:31:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:00.826 18:31:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:00.826 18:31:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:00.826 18:31:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:00.826 18:31:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:33:00.826 "name": "raid_bdev1", 00:33:00.826 "uuid": "bb1cbb05-aa6f-4ce1-8ce6-2c03f884fe75", 00:33:00.826 "strip_size_kb": 0, 00:33:00.826 "state": "online", 00:33:00.826 "raid_level": "raid1", 00:33:00.826 "superblock": true, 00:33:00.826 "num_base_bdevs": 4, 00:33:00.826 "num_base_bdevs_discovered": 3, 00:33:00.826 "num_base_bdevs_operational": 3, 00:33:00.826 "process": { 00:33:00.826 "type": "rebuild", 00:33:00.826 "target": "spare", 00:33:00.826 "progress": { 00:33:00.826 "blocks": 26624, 00:33:00.826 "percent": 41 00:33:00.826 } 00:33:00.826 }, 00:33:00.826 "base_bdevs_list": [ 00:33:00.826 { 00:33:00.826 "name": "spare", 00:33:00.826 "uuid": "5facc55b-1498-5360-bec9-2c2a8443244b", 00:33:00.826 "is_configured": 
true, 00:33:00.826 "data_offset": 2048, 00:33:00.826 "data_size": 63488 00:33:00.826 }, 00:33:00.826 { 00:33:00.826 "name": null, 00:33:00.826 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:00.826 "is_configured": false, 00:33:00.826 "data_offset": 0, 00:33:00.826 "data_size": 63488 00:33:00.826 }, 00:33:00.826 { 00:33:00.826 "name": "BaseBdev3", 00:33:00.826 "uuid": "4198488f-a494-5de1-9e90-f9ab041f7ea0", 00:33:00.826 "is_configured": true, 00:33:00.826 "data_offset": 2048, 00:33:00.826 "data_size": 63488 00:33:00.826 }, 00:33:00.826 { 00:33:00.826 "name": "BaseBdev4", 00:33:00.826 "uuid": "2e6c8ce5-0fd0-5812-9ec0-8a28f5096409", 00:33:00.826 "is_configured": true, 00:33:00.826 "data_offset": 2048, 00:33:00.826 "data_size": 63488 00:33:00.826 } 00:33:00.826 ] 00:33:00.826 }' 00:33:00.826 18:31:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:33:00.826 18:31:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:33:00.826 18:31:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:33:00.826 18:31:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:33:00.826 18:31:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:33:02.199 18:31:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:33:02.199 18:31:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:33:02.199 18:31:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:33:02.199 18:31:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:33:02.199 18:31:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:33:02.199 18:31:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local 
raid_bdev_info 00:33:02.199 18:31:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:02.199 18:31:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:02.199 18:31:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:02.199 18:31:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:02.199 18:31:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:02.199 18:31:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:33:02.199 "name": "raid_bdev1", 00:33:02.199 "uuid": "bb1cbb05-aa6f-4ce1-8ce6-2c03f884fe75", 00:33:02.199 "strip_size_kb": 0, 00:33:02.199 "state": "online", 00:33:02.199 "raid_level": "raid1", 00:33:02.199 "superblock": true, 00:33:02.199 "num_base_bdevs": 4, 00:33:02.199 "num_base_bdevs_discovered": 3, 00:33:02.199 "num_base_bdevs_operational": 3, 00:33:02.199 "process": { 00:33:02.199 "type": "rebuild", 00:33:02.199 "target": "spare", 00:33:02.199 "progress": { 00:33:02.199 "blocks": 49152, 00:33:02.199 "percent": 77 00:33:02.199 } 00:33:02.199 }, 00:33:02.199 "base_bdevs_list": [ 00:33:02.199 { 00:33:02.199 "name": "spare", 00:33:02.199 "uuid": "5facc55b-1498-5360-bec9-2c2a8443244b", 00:33:02.199 "is_configured": true, 00:33:02.199 "data_offset": 2048, 00:33:02.199 "data_size": 63488 00:33:02.199 }, 00:33:02.199 { 00:33:02.199 "name": null, 00:33:02.199 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:02.199 "is_configured": false, 00:33:02.199 "data_offset": 0, 00:33:02.199 "data_size": 63488 00:33:02.199 }, 00:33:02.199 { 00:33:02.199 "name": "BaseBdev3", 00:33:02.199 "uuid": "4198488f-a494-5de1-9e90-f9ab041f7ea0", 00:33:02.199 "is_configured": true, 00:33:02.199 "data_offset": 2048, 00:33:02.199 "data_size": 63488 00:33:02.199 }, 00:33:02.199 { 00:33:02.199 "name": "BaseBdev4", 00:33:02.199 "uuid": 
"2e6c8ce5-0fd0-5812-9ec0-8a28f5096409", 00:33:02.199 "is_configured": true, 00:33:02.199 "data_offset": 2048, 00:33:02.199 "data_size": 63488 00:33:02.199 } 00:33:02.199 ] 00:33:02.199 }' 00:33:02.199 18:31:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:33:02.199 18:31:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:33:02.199 18:31:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:33:02.199 18:31:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:33:02.199 18:31:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:33:02.457 [2024-12-06 18:31:33.398729] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:33:02.457 [2024-12-06 18:31:33.399167] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:33:02.457 [2024-12-06 18:31:33.399374] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:33:03.022 18:31:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:33:03.022 18:31:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:33:03.022 18:31:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:33:03.022 18:31:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:33:03.022 18:31:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:33:03.022 18:31:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:33:03.022 18:31:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:03.022 18:31:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:33:03.022 18:31:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:03.022 18:31:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:03.022 18:31:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:03.022 18:31:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:33:03.022 "name": "raid_bdev1", 00:33:03.022 "uuid": "bb1cbb05-aa6f-4ce1-8ce6-2c03f884fe75", 00:33:03.022 "strip_size_kb": 0, 00:33:03.022 "state": "online", 00:33:03.022 "raid_level": "raid1", 00:33:03.022 "superblock": true, 00:33:03.022 "num_base_bdevs": 4, 00:33:03.022 "num_base_bdevs_discovered": 3, 00:33:03.022 "num_base_bdevs_operational": 3, 00:33:03.022 "base_bdevs_list": [ 00:33:03.022 { 00:33:03.022 "name": "spare", 00:33:03.022 "uuid": "5facc55b-1498-5360-bec9-2c2a8443244b", 00:33:03.022 "is_configured": true, 00:33:03.022 "data_offset": 2048, 00:33:03.022 "data_size": 63488 00:33:03.022 }, 00:33:03.022 { 00:33:03.022 "name": null, 00:33:03.022 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:03.022 "is_configured": false, 00:33:03.022 "data_offset": 0, 00:33:03.022 "data_size": 63488 00:33:03.022 }, 00:33:03.022 { 00:33:03.022 "name": "BaseBdev3", 00:33:03.022 "uuid": "4198488f-a494-5de1-9e90-f9ab041f7ea0", 00:33:03.022 "is_configured": true, 00:33:03.022 "data_offset": 2048, 00:33:03.022 "data_size": 63488 00:33:03.022 }, 00:33:03.022 { 00:33:03.022 "name": "BaseBdev4", 00:33:03.022 "uuid": "2e6c8ce5-0fd0-5812-9ec0-8a28f5096409", 00:33:03.022 "is_configured": true, 00:33:03.022 "data_offset": 2048, 00:33:03.022 "data_size": 63488 00:33:03.022 } 00:33:03.022 ] 00:33:03.022 }' 00:33:03.022 18:31:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:33:03.280 18:31:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 
00:33:03.280 18:31:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:33:03.280 18:31:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:33:03.280 18:31:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:33:03.280 18:31:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:33:03.280 18:31:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:33:03.280 18:31:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:33:03.280 18:31:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:33:03.280 18:31:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:33:03.281 18:31:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:03.281 18:31:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:03.281 18:31:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:03.281 18:31:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:03.281 18:31:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:03.281 18:31:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:33:03.281 "name": "raid_bdev1", 00:33:03.281 "uuid": "bb1cbb05-aa6f-4ce1-8ce6-2c03f884fe75", 00:33:03.281 "strip_size_kb": 0, 00:33:03.281 "state": "online", 00:33:03.281 "raid_level": "raid1", 00:33:03.281 "superblock": true, 00:33:03.281 "num_base_bdevs": 4, 00:33:03.281 "num_base_bdevs_discovered": 3, 00:33:03.281 "num_base_bdevs_operational": 3, 00:33:03.281 "base_bdevs_list": [ 00:33:03.281 { 00:33:03.281 "name": "spare", 00:33:03.281 "uuid": 
"5facc55b-1498-5360-bec9-2c2a8443244b", 00:33:03.281 "is_configured": true, 00:33:03.281 "data_offset": 2048, 00:33:03.281 "data_size": 63488 00:33:03.281 }, 00:33:03.281 { 00:33:03.281 "name": null, 00:33:03.281 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:03.281 "is_configured": false, 00:33:03.281 "data_offset": 0, 00:33:03.281 "data_size": 63488 00:33:03.281 }, 00:33:03.281 { 00:33:03.281 "name": "BaseBdev3", 00:33:03.281 "uuid": "4198488f-a494-5de1-9e90-f9ab041f7ea0", 00:33:03.281 "is_configured": true, 00:33:03.281 "data_offset": 2048, 00:33:03.281 "data_size": 63488 00:33:03.281 }, 00:33:03.281 { 00:33:03.281 "name": "BaseBdev4", 00:33:03.281 "uuid": "2e6c8ce5-0fd0-5812-9ec0-8a28f5096409", 00:33:03.281 "is_configured": true, 00:33:03.281 "data_offset": 2048, 00:33:03.281 "data_size": 63488 00:33:03.281 } 00:33:03.281 ] 00:33:03.281 }' 00:33:03.281 18:31:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:33:03.281 18:31:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:33:03.281 18:31:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:33:03.281 18:31:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:33:03.281 18:31:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:33:03.281 18:31:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:33:03.281 18:31:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:33:03.281 18:31:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:33:03.281 18:31:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:33:03.281 18:31:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 
00:33:03.281 18:31:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:33:03.281 18:31:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:33:03.281 18:31:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:33:03.281 18:31:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:33:03.281 18:31:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:03.281 18:31:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:03.281 18:31:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:03.281 18:31:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:03.281 18:31:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:03.281 18:31:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:33:03.281 "name": "raid_bdev1", 00:33:03.281 "uuid": "bb1cbb05-aa6f-4ce1-8ce6-2c03f884fe75", 00:33:03.281 "strip_size_kb": 0, 00:33:03.281 "state": "online", 00:33:03.281 "raid_level": "raid1", 00:33:03.281 "superblock": true, 00:33:03.281 "num_base_bdevs": 4, 00:33:03.281 "num_base_bdevs_discovered": 3, 00:33:03.281 "num_base_bdevs_operational": 3, 00:33:03.281 "base_bdevs_list": [ 00:33:03.281 { 00:33:03.281 "name": "spare", 00:33:03.281 "uuid": "5facc55b-1498-5360-bec9-2c2a8443244b", 00:33:03.281 "is_configured": true, 00:33:03.281 "data_offset": 2048, 00:33:03.281 "data_size": 63488 00:33:03.281 }, 00:33:03.281 { 00:33:03.281 "name": null, 00:33:03.281 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:03.281 "is_configured": false, 00:33:03.281 "data_offset": 0, 00:33:03.281 "data_size": 63488 00:33:03.281 }, 00:33:03.281 { 00:33:03.281 "name": "BaseBdev3", 00:33:03.281 "uuid": 
"4198488f-a494-5de1-9e90-f9ab041f7ea0", 00:33:03.281 "is_configured": true, 00:33:03.281 "data_offset": 2048, 00:33:03.281 "data_size": 63488 00:33:03.281 }, 00:33:03.281 { 00:33:03.281 "name": "BaseBdev4", 00:33:03.281 "uuid": "2e6c8ce5-0fd0-5812-9ec0-8a28f5096409", 00:33:03.281 "is_configured": true, 00:33:03.281 "data_offset": 2048, 00:33:03.281 "data_size": 63488 00:33:03.281 } 00:33:03.281 ] 00:33:03.281 }' 00:33:03.281 18:31:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:33:03.281 18:31:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:03.872 18:31:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:33:03.872 18:31:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:03.872 18:31:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:03.872 [2024-12-06 18:31:34.559540] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:33:03.872 [2024-12-06 18:31:34.559594] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:33:03.872 [2024-12-06 18:31:34.559715] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:33:03.872 [2024-12-06 18:31:34.559817] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:33:03.872 [2024-12-06 18:31:34.559831] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:33:03.872 18:31:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:03.872 18:31:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:03.872 18:31:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:33:03.872 18:31:34 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:33:03.872 18:31:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:03.872 18:31:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:03.872 18:31:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:33:03.872 18:31:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:33:03.872 18:31:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:33:03.872 18:31:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:33:03.872 18:31:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:33:03.872 18:31:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:33:03.872 18:31:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:33:03.872 18:31:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:33:03.872 18:31:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:33:03.872 18:31:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:33:03.872 18:31:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:33:03.872 18:31:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:33:03.872 18:31:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:33:04.129 /dev/nbd0 00:33:04.129 18:31:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:33:04.129 18:31:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:33:04.129 18:31:34 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:33:04.129 18:31:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:33:04.129 18:31:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:33:04.129 18:31:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:33:04.129 18:31:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:33:04.129 18:31:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:33:04.129 18:31:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:33:04.129 18:31:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:33:04.129 18:31:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:33:04.129 1+0 records in 00:33:04.129 1+0 records out 00:33:04.129 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000424641 s, 9.6 MB/s 00:33:04.129 18:31:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:33:04.129 18:31:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:33:04.129 18:31:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:33:04.129 18:31:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:33:04.129 18:31:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:33:04.129 18:31:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:33:04.129 18:31:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:33:04.129 18:31:34 bdev_raid.raid_rebuild_test_sb -- 
bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:33:04.386 /dev/nbd1 00:33:04.386 18:31:35 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:33:04.386 18:31:35 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:33:04.386 18:31:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:33:04.386 18:31:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:33:04.386 18:31:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:33:04.386 18:31:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:33:04.386 18:31:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:33:04.386 18:31:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:33:04.386 18:31:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:33:04.386 18:31:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:33:04.386 18:31:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:33:04.386 1+0 records in 00:33:04.386 1+0 records out 00:33:04.386 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000613346 s, 6.7 MB/s 00:33:04.386 18:31:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:33:04.386 18:31:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:33:04.386 18:31:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:33:04.386 18:31:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # 
'[' 4096 '!=' 0 ']' 00:33:04.386 18:31:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:33:04.386 18:31:35 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:33:04.386 18:31:35 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:33:04.386 18:31:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:33:04.644 18:31:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:33:04.644 18:31:35 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:33:04.644 18:31:35 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:33:04.644 18:31:35 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:33:04.644 18:31:35 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:33:04.644 18:31:35 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:33:04.644 18:31:35 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:33:04.899 18:31:35 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:33:04.899 18:31:35 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:33:04.899 18:31:35 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:33:04.899 18:31:35 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:33:04.899 18:31:35 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:33:04.899 18:31:35 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:33:04.899 18:31:35 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:33:04.899 
18:31:35 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:33:04.899 18:31:35 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:33:04.899 18:31:35 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:33:04.899 18:31:35 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:33:05.158 18:31:35 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:33:05.158 18:31:35 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:33:05.158 18:31:35 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:33:05.158 18:31:35 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:33:05.158 18:31:35 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:33:05.158 18:31:35 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:33:05.158 18:31:35 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:33:05.158 18:31:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:33:05.158 18:31:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:33:05.158 18:31:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:05.158 18:31:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:05.158 18:31:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:05.158 18:31:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:33:05.158 18:31:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:05.158 18:31:35 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:33:05.158 [2024-12-06 18:31:35.873461] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:33:05.158 [2024-12-06 18:31:35.873545] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:33:05.158 [2024-12-06 18:31:35.873577] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:33:05.158 [2024-12-06 18:31:35.873591] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:33:05.158 [2024-12-06 18:31:35.876615] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:33:05.158 [2024-12-06 18:31:35.876662] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:33:05.158 [2024-12-06 18:31:35.876797] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:33:05.158 [2024-12-06 18:31:35.876857] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:33:05.158 [2024-12-06 18:31:35.877028] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:33:05.158 [2024-12-06 18:31:35.877129] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:33:05.158 spare 00:33:05.158 18:31:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:05.158 18:31:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:33:05.158 18:31:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:05.158 18:31:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:05.158 [2024-12-06 18:31:35.977114] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:33:05.158 [2024-12-06 18:31:35.977353] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:33:05.158 [2024-12-06 
18:31:35.977837] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1c80 00:33:05.158 [2024-12-06 18:31:35.978108] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:33:05.158 [2024-12-06 18:31:35.978124] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:33:05.158 [2024-12-06 18:31:35.978416] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:33:05.158 18:31:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:05.158 18:31:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:33:05.158 18:31:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:33:05.158 18:31:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:33:05.158 18:31:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:33:05.158 18:31:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:33:05.158 18:31:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:33:05.158 18:31:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:33:05.158 18:31:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:33:05.158 18:31:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:33:05.158 18:31:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:33:05.158 18:31:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:05.158 18:31:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:05.158 18:31:35 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:05.158 18:31:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:05.158 18:31:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:05.158 18:31:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:33:05.158 "name": "raid_bdev1", 00:33:05.158 "uuid": "bb1cbb05-aa6f-4ce1-8ce6-2c03f884fe75", 00:33:05.158 "strip_size_kb": 0, 00:33:05.158 "state": "online", 00:33:05.158 "raid_level": "raid1", 00:33:05.158 "superblock": true, 00:33:05.158 "num_base_bdevs": 4, 00:33:05.158 "num_base_bdevs_discovered": 3, 00:33:05.158 "num_base_bdevs_operational": 3, 00:33:05.158 "base_bdevs_list": [ 00:33:05.158 { 00:33:05.158 "name": "spare", 00:33:05.158 "uuid": "5facc55b-1498-5360-bec9-2c2a8443244b", 00:33:05.158 "is_configured": true, 00:33:05.158 "data_offset": 2048, 00:33:05.158 "data_size": 63488 00:33:05.158 }, 00:33:05.158 { 00:33:05.158 "name": null, 00:33:05.158 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:05.158 "is_configured": false, 00:33:05.158 "data_offset": 2048, 00:33:05.158 "data_size": 63488 00:33:05.158 }, 00:33:05.158 { 00:33:05.158 "name": "BaseBdev3", 00:33:05.158 "uuid": "4198488f-a494-5de1-9e90-f9ab041f7ea0", 00:33:05.158 "is_configured": true, 00:33:05.158 "data_offset": 2048, 00:33:05.158 "data_size": 63488 00:33:05.158 }, 00:33:05.158 { 00:33:05.158 "name": "BaseBdev4", 00:33:05.158 "uuid": "2e6c8ce5-0fd0-5812-9ec0-8a28f5096409", 00:33:05.158 "is_configured": true, 00:33:05.158 "data_offset": 2048, 00:33:05.158 "data_size": 63488 00:33:05.158 } 00:33:05.158 ] 00:33:05.158 }' 00:33:05.158 18:31:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:33:05.158 18:31:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:05.726 18:31:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # 
verify_raid_bdev_process raid_bdev1 none none 00:33:05.726 18:31:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:33:05.726 18:31:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:33:05.726 18:31:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:33:05.726 18:31:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:33:05.726 18:31:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:05.726 18:31:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:05.726 18:31:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:05.726 18:31:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:05.726 18:31:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:05.726 18:31:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:33:05.726 "name": "raid_bdev1", 00:33:05.726 "uuid": "bb1cbb05-aa6f-4ce1-8ce6-2c03f884fe75", 00:33:05.726 "strip_size_kb": 0, 00:33:05.726 "state": "online", 00:33:05.726 "raid_level": "raid1", 00:33:05.726 "superblock": true, 00:33:05.726 "num_base_bdevs": 4, 00:33:05.726 "num_base_bdevs_discovered": 3, 00:33:05.726 "num_base_bdevs_operational": 3, 00:33:05.726 "base_bdevs_list": [ 00:33:05.726 { 00:33:05.726 "name": "spare", 00:33:05.726 "uuid": "5facc55b-1498-5360-bec9-2c2a8443244b", 00:33:05.726 "is_configured": true, 00:33:05.726 "data_offset": 2048, 00:33:05.726 "data_size": 63488 00:33:05.726 }, 00:33:05.726 { 00:33:05.726 "name": null, 00:33:05.726 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:05.726 "is_configured": false, 00:33:05.726 "data_offset": 2048, 00:33:05.726 "data_size": 63488 00:33:05.726 }, 00:33:05.726 { 00:33:05.726 "name": 
"BaseBdev3", 00:33:05.726 "uuid": "4198488f-a494-5de1-9e90-f9ab041f7ea0", 00:33:05.726 "is_configured": true, 00:33:05.726 "data_offset": 2048, 00:33:05.726 "data_size": 63488 00:33:05.726 }, 00:33:05.726 { 00:33:05.726 "name": "BaseBdev4", 00:33:05.726 "uuid": "2e6c8ce5-0fd0-5812-9ec0-8a28f5096409", 00:33:05.726 "is_configured": true, 00:33:05.726 "data_offset": 2048, 00:33:05.726 "data_size": 63488 00:33:05.726 } 00:33:05.726 ] 00:33:05.726 }' 00:33:05.726 18:31:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:33:05.726 18:31:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:33:05.726 18:31:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:33:05.726 18:31:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:33:05.726 18:31:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:05.726 18:31:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:05.726 18:31:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:05.727 18:31:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:33:05.727 18:31:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:05.727 18:31:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:33:05.727 18:31:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:33:05.727 18:31:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:05.727 18:31:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:05.727 [2024-12-06 18:31:36.541748] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:33:05.727 18:31:36 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:05.727 18:31:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:33:05.727 18:31:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:33:05.727 18:31:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:33:05.727 18:31:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:33:05.727 18:31:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:33:05.727 18:31:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:33:05.727 18:31:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:33:05.727 18:31:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:33:05.727 18:31:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:33:05.727 18:31:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:33:05.727 18:31:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:05.727 18:31:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:05.727 18:31:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:05.727 18:31:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:05.727 18:31:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:05.727 18:31:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:33:05.727 "name": "raid_bdev1", 00:33:05.727 "uuid": "bb1cbb05-aa6f-4ce1-8ce6-2c03f884fe75", 00:33:05.727 "strip_size_kb": 0, 00:33:05.727 "state": "online", 
00:33:05.727 "raid_level": "raid1", 00:33:05.727 "superblock": true, 00:33:05.727 "num_base_bdevs": 4, 00:33:05.727 "num_base_bdevs_discovered": 2, 00:33:05.727 "num_base_bdevs_operational": 2, 00:33:05.727 "base_bdevs_list": [ 00:33:05.727 { 00:33:05.727 "name": null, 00:33:05.727 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:05.727 "is_configured": false, 00:33:05.727 "data_offset": 0, 00:33:05.727 "data_size": 63488 00:33:05.727 }, 00:33:05.727 { 00:33:05.727 "name": null, 00:33:05.727 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:05.727 "is_configured": false, 00:33:05.727 "data_offset": 2048, 00:33:05.727 "data_size": 63488 00:33:05.727 }, 00:33:05.727 { 00:33:05.727 "name": "BaseBdev3", 00:33:05.727 "uuid": "4198488f-a494-5de1-9e90-f9ab041f7ea0", 00:33:05.727 "is_configured": true, 00:33:05.727 "data_offset": 2048, 00:33:05.727 "data_size": 63488 00:33:05.727 }, 00:33:05.727 { 00:33:05.727 "name": "BaseBdev4", 00:33:05.727 "uuid": "2e6c8ce5-0fd0-5812-9ec0-8a28f5096409", 00:33:05.727 "is_configured": true, 00:33:05.727 "data_offset": 2048, 00:33:05.727 "data_size": 63488 00:33:05.727 } 00:33:05.727 ] 00:33:05.727 }' 00:33:05.727 18:31:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:33:05.727 18:31:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:06.296 18:31:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:33:06.296 18:31:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:06.296 18:31:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:06.296 [2024-12-06 18:31:36.977346] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:33:06.296 [2024-12-06 18:31:36.977792] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 
00:33:06.296 [2024-12-06 18:31:36.977821] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:33:06.296 [2024-12-06 18:31:36.977874] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:33:06.296 [2024-12-06 18:31:36.993276] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1d50 00:33:06.296 18:31:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:06.296 18:31:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:33:06.296 [2024-12-06 18:31:36.995750] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:33:07.235 18:31:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:33:07.235 18:31:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:33:07.235 18:31:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:33:07.235 18:31:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:33:07.235 18:31:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:33:07.235 18:31:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:07.235 18:31:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:07.235 18:31:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:07.235 18:31:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:07.235 18:31:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:07.235 18:31:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:33:07.235 "name": "raid_bdev1", 00:33:07.235 "uuid": 
"bb1cbb05-aa6f-4ce1-8ce6-2c03f884fe75", 00:33:07.235 "strip_size_kb": 0, 00:33:07.235 "state": "online", 00:33:07.235 "raid_level": "raid1", 00:33:07.235 "superblock": true, 00:33:07.235 "num_base_bdevs": 4, 00:33:07.235 "num_base_bdevs_discovered": 3, 00:33:07.235 "num_base_bdevs_operational": 3, 00:33:07.235 "process": { 00:33:07.235 "type": "rebuild", 00:33:07.235 "target": "spare", 00:33:07.235 "progress": { 00:33:07.235 "blocks": 20480, 00:33:07.235 "percent": 32 00:33:07.235 } 00:33:07.235 }, 00:33:07.235 "base_bdevs_list": [ 00:33:07.235 { 00:33:07.235 "name": "spare", 00:33:07.235 "uuid": "5facc55b-1498-5360-bec9-2c2a8443244b", 00:33:07.235 "is_configured": true, 00:33:07.235 "data_offset": 2048, 00:33:07.235 "data_size": 63488 00:33:07.235 }, 00:33:07.235 { 00:33:07.235 "name": null, 00:33:07.235 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:07.235 "is_configured": false, 00:33:07.235 "data_offset": 2048, 00:33:07.235 "data_size": 63488 00:33:07.235 }, 00:33:07.235 { 00:33:07.235 "name": "BaseBdev3", 00:33:07.235 "uuid": "4198488f-a494-5de1-9e90-f9ab041f7ea0", 00:33:07.235 "is_configured": true, 00:33:07.235 "data_offset": 2048, 00:33:07.235 "data_size": 63488 00:33:07.235 }, 00:33:07.235 { 00:33:07.235 "name": "BaseBdev4", 00:33:07.235 "uuid": "2e6c8ce5-0fd0-5812-9ec0-8a28f5096409", 00:33:07.235 "is_configured": true, 00:33:07.235 "data_offset": 2048, 00:33:07.235 "data_size": 63488 00:33:07.235 } 00:33:07.235 ] 00:33:07.235 }' 00:33:07.235 18:31:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:33:07.235 18:31:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:33:07.235 18:31:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:33:07.235 18:31:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:33:07.235 18:31:38 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:33:07.235 18:31:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:07.235 18:31:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:07.235 [2024-12-06 18:31:38.127256] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:33:07.494 [2024-12-06 18:31:38.205522] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:33:07.494 [2024-12-06 18:31:38.205602] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:33:07.494 [2024-12-06 18:31:38.205626] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:33:07.494 [2024-12-06 18:31:38.205636] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:33:07.494 18:31:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:07.494 18:31:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:33:07.494 18:31:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:33:07.494 18:31:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:33:07.494 18:31:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:33:07.494 18:31:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:33:07.494 18:31:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:33:07.495 18:31:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:33:07.495 18:31:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:33:07.495 18:31:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:33:07.495 18:31:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:33:07.495 18:31:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:07.495 18:31:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:07.495 18:31:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:07.495 18:31:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:07.495 18:31:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:07.495 18:31:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:33:07.495 "name": "raid_bdev1", 00:33:07.495 "uuid": "bb1cbb05-aa6f-4ce1-8ce6-2c03f884fe75", 00:33:07.495 "strip_size_kb": 0, 00:33:07.495 "state": "online", 00:33:07.495 "raid_level": "raid1", 00:33:07.495 "superblock": true, 00:33:07.495 "num_base_bdevs": 4, 00:33:07.495 "num_base_bdevs_discovered": 2, 00:33:07.495 "num_base_bdevs_operational": 2, 00:33:07.495 "base_bdevs_list": [ 00:33:07.495 { 00:33:07.495 "name": null, 00:33:07.495 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:07.495 "is_configured": false, 00:33:07.495 "data_offset": 0, 00:33:07.495 "data_size": 63488 00:33:07.495 }, 00:33:07.495 { 00:33:07.495 "name": null, 00:33:07.495 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:07.495 "is_configured": false, 00:33:07.495 "data_offset": 2048, 00:33:07.495 "data_size": 63488 00:33:07.495 }, 00:33:07.495 { 00:33:07.495 "name": "BaseBdev3", 00:33:07.495 "uuid": "4198488f-a494-5de1-9e90-f9ab041f7ea0", 00:33:07.495 "is_configured": true, 00:33:07.495 "data_offset": 2048, 00:33:07.495 "data_size": 63488 00:33:07.495 }, 00:33:07.495 { 00:33:07.495 "name": "BaseBdev4", 00:33:07.495 "uuid": "2e6c8ce5-0fd0-5812-9ec0-8a28f5096409", 00:33:07.495 "is_configured": true, 00:33:07.495 
"data_offset": 2048, 00:33:07.495 "data_size": 63488 00:33:07.495 } 00:33:07.495 ] 00:33:07.495 }' 00:33:07.495 18:31:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:33:07.495 18:31:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:07.754 18:31:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:33:07.754 18:31:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:07.754 18:31:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:07.754 [2024-12-06 18:31:38.650746] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:33:07.754 [2024-12-06 18:31:38.650836] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:33:07.754 [2024-12-06 18:31:38.650879] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c380 00:33:07.754 [2024-12-06 18:31:38.650893] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:33:07.754 [2024-12-06 18:31:38.651518] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:33:07.754 [2024-12-06 18:31:38.651721] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:33:07.754 [2024-12-06 18:31:38.651887] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:33:07.754 [2024-12-06 18:31:38.651906] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:33:07.754 [2024-12-06 18:31:38.651926] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:33:07.754 [2024-12-06 18:31:38.651967] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:33:07.754 [2024-12-06 18:31:38.667503] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1e20 00:33:07.754 spare 00:33:07.754 18:31:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:07.754 18:31:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:33:07.754 [2024-12-06 18:31:38.669947] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:33:09.135 18:31:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:33:09.135 18:31:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:33:09.135 18:31:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:33:09.135 18:31:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:33:09.135 18:31:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:33:09.135 18:31:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:09.135 18:31:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:09.135 18:31:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:09.135 18:31:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:09.135 18:31:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:09.135 18:31:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:33:09.135 "name": "raid_bdev1", 00:33:09.135 "uuid": "bb1cbb05-aa6f-4ce1-8ce6-2c03f884fe75", 00:33:09.135 "strip_size_kb": 0, 00:33:09.135 "state": "online", 00:33:09.135 
"raid_level": "raid1", 00:33:09.135 "superblock": true, 00:33:09.135 "num_base_bdevs": 4, 00:33:09.135 "num_base_bdevs_discovered": 3, 00:33:09.135 "num_base_bdevs_operational": 3, 00:33:09.135 "process": { 00:33:09.135 "type": "rebuild", 00:33:09.135 "target": "spare", 00:33:09.135 "progress": { 00:33:09.135 "blocks": 20480, 00:33:09.135 "percent": 32 00:33:09.135 } 00:33:09.135 }, 00:33:09.135 "base_bdevs_list": [ 00:33:09.135 { 00:33:09.135 "name": "spare", 00:33:09.135 "uuid": "5facc55b-1498-5360-bec9-2c2a8443244b", 00:33:09.135 "is_configured": true, 00:33:09.135 "data_offset": 2048, 00:33:09.135 "data_size": 63488 00:33:09.135 }, 00:33:09.135 { 00:33:09.135 "name": null, 00:33:09.135 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:09.135 "is_configured": false, 00:33:09.135 "data_offset": 2048, 00:33:09.135 "data_size": 63488 00:33:09.135 }, 00:33:09.135 { 00:33:09.135 "name": "BaseBdev3", 00:33:09.135 "uuid": "4198488f-a494-5de1-9e90-f9ab041f7ea0", 00:33:09.135 "is_configured": true, 00:33:09.135 "data_offset": 2048, 00:33:09.135 "data_size": 63488 00:33:09.135 }, 00:33:09.135 { 00:33:09.135 "name": "BaseBdev4", 00:33:09.135 "uuid": "2e6c8ce5-0fd0-5812-9ec0-8a28f5096409", 00:33:09.135 "is_configured": true, 00:33:09.135 "data_offset": 2048, 00:33:09.135 "data_size": 63488 00:33:09.135 } 00:33:09.135 ] 00:33:09.135 }' 00:33:09.135 18:31:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:33:09.135 18:31:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:33:09.136 18:31:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:33:09.136 18:31:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:33:09.136 18:31:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:33:09.136 18:31:39 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:33:09.136 18:31:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:09.136 [2024-12-06 18:31:39.826160] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:33:09.136 [2024-12-06 18:31:39.879887] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:33:09.136 [2024-12-06 18:31:39.880258] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:33:09.136 [2024-12-06 18:31:39.880286] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:33:09.136 [2024-12-06 18:31:39.880302] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:33:09.136 18:31:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:09.136 18:31:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:33:09.136 18:31:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:33:09.136 18:31:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:33:09.136 18:31:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:33:09.136 18:31:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:33:09.136 18:31:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:33:09.136 18:31:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:33:09.136 18:31:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:33:09.136 18:31:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:33:09.136 18:31:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:33:09.136 
18:31:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:09.136 18:31:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:09.136 18:31:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:09.136 18:31:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:09.136 18:31:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:09.136 18:31:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:33:09.136 "name": "raid_bdev1", 00:33:09.136 "uuid": "bb1cbb05-aa6f-4ce1-8ce6-2c03f884fe75", 00:33:09.136 "strip_size_kb": 0, 00:33:09.136 "state": "online", 00:33:09.136 "raid_level": "raid1", 00:33:09.136 "superblock": true, 00:33:09.136 "num_base_bdevs": 4, 00:33:09.136 "num_base_bdevs_discovered": 2, 00:33:09.136 "num_base_bdevs_operational": 2, 00:33:09.136 "base_bdevs_list": [ 00:33:09.136 { 00:33:09.136 "name": null, 00:33:09.136 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:09.136 "is_configured": false, 00:33:09.136 "data_offset": 0, 00:33:09.136 "data_size": 63488 00:33:09.136 }, 00:33:09.136 { 00:33:09.136 "name": null, 00:33:09.136 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:09.136 "is_configured": false, 00:33:09.136 "data_offset": 2048, 00:33:09.136 "data_size": 63488 00:33:09.136 }, 00:33:09.136 { 00:33:09.136 "name": "BaseBdev3", 00:33:09.136 "uuid": "4198488f-a494-5de1-9e90-f9ab041f7ea0", 00:33:09.136 "is_configured": true, 00:33:09.136 "data_offset": 2048, 00:33:09.136 "data_size": 63488 00:33:09.136 }, 00:33:09.136 { 00:33:09.136 "name": "BaseBdev4", 00:33:09.136 "uuid": "2e6c8ce5-0fd0-5812-9ec0-8a28f5096409", 00:33:09.136 "is_configured": true, 00:33:09.136 "data_offset": 2048, 00:33:09.136 "data_size": 63488 00:33:09.136 } 00:33:09.136 ] 00:33:09.136 }' 00:33:09.136 18:31:39 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:33:09.136 18:31:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:09.704 18:31:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:33:09.704 18:31:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:33:09.704 18:31:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:33:09.704 18:31:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:33:09.704 18:31:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:33:09.704 18:31:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:09.704 18:31:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:09.704 18:31:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:09.704 18:31:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:09.704 18:31:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:09.704 18:31:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:33:09.704 "name": "raid_bdev1", 00:33:09.704 "uuid": "bb1cbb05-aa6f-4ce1-8ce6-2c03f884fe75", 00:33:09.704 "strip_size_kb": 0, 00:33:09.704 "state": "online", 00:33:09.704 "raid_level": "raid1", 00:33:09.704 "superblock": true, 00:33:09.704 "num_base_bdevs": 4, 00:33:09.704 "num_base_bdevs_discovered": 2, 00:33:09.704 "num_base_bdevs_operational": 2, 00:33:09.704 "base_bdevs_list": [ 00:33:09.704 { 00:33:09.704 "name": null, 00:33:09.704 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:09.704 "is_configured": false, 00:33:09.704 "data_offset": 0, 00:33:09.704 "data_size": 63488 00:33:09.704 }, 00:33:09.704 
{ 00:33:09.704 "name": null, 00:33:09.704 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:09.704 "is_configured": false, 00:33:09.704 "data_offset": 2048, 00:33:09.704 "data_size": 63488 00:33:09.704 }, 00:33:09.704 { 00:33:09.704 "name": "BaseBdev3", 00:33:09.704 "uuid": "4198488f-a494-5de1-9e90-f9ab041f7ea0", 00:33:09.704 "is_configured": true, 00:33:09.704 "data_offset": 2048, 00:33:09.704 "data_size": 63488 00:33:09.704 }, 00:33:09.704 { 00:33:09.704 "name": "BaseBdev4", 00:33:09.704 "uuid": "2e6c8ce5-0fd0-5812-9ec0-8a28f5096409", 00:33:09.704 "is_configured": true, 00:33:09.704 "data_offset": 2048, 00:33:09.704 "data_size": 63488 00:33:09.704 } 00:33:09.704 ] 00:33:09.704 }' 00:33:09.704 18:31:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:33:09.704 18:31:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:33:09.704 18:31:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:33:09.704 18:31:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:33:09.704 18:31:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:33:09.704 18:31:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:09.704 18:31:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:09.704 18:31:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:09.704 18:31:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:33:09.704 18:31:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:09.704 18:31:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:09.704 [2024-12-06 18:31:40.505245] vbdev_passthru.c: 
608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:33:09.704 [2024-12-06 18:31:40.505442] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:33:09.704 [2024-12-06 18:31:40.505479] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c980 00:33:09.704 [2024-12-06 18:31:40.505496] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:33:09.704 [2024-12-06 18:31:40.506075] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:33:09.704 [2024-12-06 18:31:40.506101] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:33:09.704 [2024-12-06 18:31:40.506225] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:33:09.704 [2024-12-06 18:31:40.506248] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:33:09.704 [2024-12-06 18:31:40.506259] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:33:09.704 [2024-12-06 18:31:40.506290] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:33:09.704 BaseBdev1 00:33:09.704 18:31:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:09.704 18:31:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:33:10.643 18:31:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:33:10.643 18:31:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:33:10.643 18:31:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:33:10.643 18:31:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:33:10.643 18:31:41 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:33:10.643 18:31:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:33:10.643 18:31:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:33:10.644 18:31:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:33:10.644 18:31:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:33:10.644 18:31:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:33:10.644 18:31:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:10.644 18:31:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:10.644 18:31:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:10.644 18:31:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:10.644 18:31:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:10.644 18:31:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:33:10.644 "name": "raid_bdev1", 00:33:10.644 "uuid": "bb1cbb05-aa6f-4ce1-8ce6-2c03f884fe75", 00:33:10.644 "strip_size_kb": 0, 00:33:10.644 "state": "online", 00:33:10.644 "raid_level": "raid1", 00:33:10.644 "superblock": true, 00:33:10.644 "num_base_bdevs": 4, 00:33:10.644 "num_base_bdevs_discovered": 2, 00:33:10.644 "num_base_bdevs_operational": 2, 00:33:10.644 "base_bdevs_list": [ 00:33:10.644 { 00:33:10.644 "name": null, 00:33:10.644 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:10.644 "is_configured": false, 00:33:10.644 "data_offset": 0, 00:33:10.644 "data_size": 63488 00:33:10.644 }, 00:33:10.644 { 00:33:10.644 "name": null, 00:33:10.644 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:10.644 
"is_configured": false, 00:33:10.644 "data_offset": 2048, 00:33:10.644 "data_size": 63488 00:33:10.644 }, 00:33:10.644 { 00:33:10.644 "name": "BaseBdev3", 00:33:10.644 "uuid": "4198488f-a494-5de1-9e90-f9ab041f7ea0", 00:33:10.644 "is_configured": true, 00:33:10.644 "data_offset": 2048, 00:33:10.644 "data_size": 63488 00:33:10.644 }, 00:33:10.644 { 00:33:10.644 "name": "BaseBdev4", 00:33:10.644 "uuid": "2e6c8ce5-0fd0-5812-9ec0-8a28f5096409", 00:33:10.644 "is_configured": true, 00:33:10.644 "data_offset": 2048, 00:33:10.644 "data_size": 63488 00:33:10.644 } 00:33:10.644 ] 00:33:10.644 }' 00:33:10.644 18:31:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:33:10.644 18:31:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:11.213 18:31:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:33:11.213 18:31:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:33:11.213 18:31:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:33:11.213 18:31:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:33:11.213 18:31:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:33:11.213 18:31:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:11.213 18:31:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:11.213 18:31:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:11.213 18:31:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:11.213 18:31:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:11.213 18:31:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 
00:33:11.213 "name": "raid_bdev1", 00:33:11.213 "uuid": "bb1cbb05-aa6f-4ce1-8ce6-2c03f884fe75", 00:33:11.213 "strip_size_kb": 0, 00:33:11.213 "state": "online", 00:33:11.213 "raid_level": "raid1", 00:33:11.213 "superblock": true, 00:33:11.213 "num_base_bdevs": 4, 00:33:11.213 "num_base_bdevs_discovered": 2, 00:33:11.213 "num_base_bdevs_operational": 2, 00:33:11.213 "base_bdevs_list": [ 00:33:11.213 { 00:33:11.213 "name": null, 00:33:11.213 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:11.213 "is_configured": false, 00:33:11.213 "data_offset": 0, 00:33:11.213 "data_size": 63488 00:33:11.213 }, 00:33:11.213 { 00:33:11.213 "name": null, 00:33:11.213 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:11.213 "is_configured": false, 00:33:11.213 "data_offset": 2048, 00:33:11.213 "data_size": 63488 00:33:11.213 }, 00:33:11.213 { 00:33:11.213 "name": "BaseBdev3", 00:33:11.213 "uuid": "4198488f-a494-5de1-9e90-f9ab041f7ea0", 00:33:11.213 "is_configured": true, 00:33:11.213 "data_offset": 2048, 00:33:11.213 "data_size": 63488 00:33:11.213 }, 00:33:11.213 { 00:33:11.213 "name": "BaseBdev4", 00:33:11.213 "uuid": "2e6c8ce5-0fd0-5812-9ec0-8a28f5096409", 00:33:11.213 "is_configured": true, 00:33:11.213 "data_offset": 2048, 00:33:11.213 "data_size": 63488 00:33:11.213 } 00:33:11.213 ] 00:33:11.213 }' 00:33:11.213 18:31:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:33:11.213 18:31:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:33:11.213 18:31:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:33:11.213 18:31:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:33:11.213 18:31:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:33:11.213 18:31:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@652 -- # local 
es=0 00:33:11.213 18:31:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:33:11.213 18:31:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:33:11.213 18:31:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:33:11.213 18:31:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:33:11.213 18:31:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:33:11.213 18:31:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:33:11.213 18:31:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:11.213 18:31:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:11.213 [2024-12-06 18:31:42.087513] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:33:11.213 [2024-12-06 18:31:42.087815] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:33:11.213 [2024-12-06 18:31:42.087832] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:33:11.213 request: 00:33:11.213 { 00:33:11.213 "base_bdev": "BaseBdev1", 00:33:11.213 "raid_bdev": "raid_bdev1", 00:33:11.213 "method": "bdev_raid_add_base_bdev", 00:33:11.213 "req_id": 1 00:33:11.213 } 00:33:11.213 Got JSON-RPC error response 00:33:11.213 response: 00:33:11.213 { 00:33:11.213 "code": -22, 00:33:11.213 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:33:11.213 } 00:33:11.213 18:31:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:33:11.213 18:31:42 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@655 -- # es=1 00:33:11.213 18:31:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:33:11.213 18:31:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:33:11.213 18:31:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:33:11.213 18:31:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:33:12.594 18:31:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:33:12.594 18:31:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:33:12.594 18:31:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:33:12.594 18:31:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:33:12.594 18:31:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:33:12.594 18:31:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:33:12.594 18:31:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:33:12.594 18:31:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:33:12.594 18:31:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:33:12.594 18:31:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:33:12.594 18:31:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:12.594 18:31:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:12.594 18:31:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:12.594 18:31:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:33:12.594 18:31:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:12.594 18:31:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:33:12.594 "name": "raid_bdev1", 00:33:12.594 "uuid": "bb1cbb05-aa6f-4ce1-8ce6-2c03f884fe75", 00:33:12.594 "strip_size_kb": 0, 00:33:12.594 "state": "online", 00:33:12.594 "raid_level": "raid1", 00:33:12.594 "superblock": true, 00:33:12.594 "num_base_bdevs": 4, 00:33:12.594 "num_base_bdevs_discovered": 2, 00:33:12.594 "num_base_bdevs_operational": 2, 00:33:12.594 "base_bdevs_list": [ 00:33:12.594 { 00:33:12.594 "name": null, 00:33:12.594 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:12.594 "is_configured": false, 00:33:12.594 "data_offset": 0, 00:33:12.594 "data_size": 63488 00:33:12.594 }, 00:33:12.594 { 00:33:12.594 "name": null, 00:33:12.594 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:12.594 "is_configured": false, 00:33:12.594 "data_offset": 2048, 00:33:12.594 "data_size": 63488 00:33:12.594 }, 00:33:12.594 { 00:33:12.594 "name": "BaseBdev3", 00:33:12.594 "uuid": "4198488f-a494-5de1-9e90-f9ab041f7ea0", 00:33:12.594 "is_configured": true, 00:33:12.594 "data_offset": 2048, 00:33:12.594 "data_size": 63488 00:33:12.594 }, 00:33:12.594 { 00:33:12.594 "name": "BaseBdev4", 00:33:12.594 "uuid": "2e6c8ce5-0fd0-5812-9ec0-8a28f5096409", 00:33:12.594 "is_configured": true, 00:33:12.594 "data_offset": 2048, 00:33:12.594 "data_size": 63488 00:33:12.594 } 00:33:12.594 ] 00:33:12.594 }' 00:33:12.594 18:31:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:33:12.594 18:31:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:12.594 18:31:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:33:12.594 18:31:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:33:12.594 18:31:43 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:33:12.594 18:31:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:33:12.594 18:31:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:33:12.594 18:31:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:12.594 18:31:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:12.594 18:31:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:12.594 18:31:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:12.594 18:31:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:12.853 18:31:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:33:12.853 "name": "raid_bdev1", 00:33:12.853 "uuid": "bb1cbb05-aa6f-4ce1-8ce6-2c03f884fe75", 00:33:12.853 "strip_size_kb": 0, 00:33:12.853 "state": "online", 00:33:12.853 "raid_level": "raid1", 00:33:12.853 "superblock": true, 00:33:12.853 "num_base_bdevs": 4, 00:33:12.853 "num_base_bdevs_discovered": 2, 00:33:12.853 "num_base_bdevs_operational": 2, 00:33:12.853 "base_bdevs_list": [ 00:33:12.853 { 00:33:12.853 "name": null, 00:33:12.853 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:12.853 "is_configured": false, 00:33:12.853 "data_offset": 0, 00:33:12.853 "data_size": 63488 00:33:12.853 }, 00:33:12.853 { 00:33:12.853 "name": null, 00:33:12.853 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:12.853 "is_configured": false, 00:33:12.853 "data_offset": 2048, 00:33:12.853 "data_size": 63488 00:33:12.853 }, 00:33:12.853 { 00:33:12.853 "name": "BaseBdev3", 00:33:12.853 "uuid": "4198488f-a494-5de1-9e90-f9ab041f7ea0", 00:33:12.853 "is_configured": true, 00:33:12.853 "data_offset": 2048, 00:33:12.853 "data_size": 63488 00:33:12.853 }, 
00:33:12.853 { 00:33:12.853 "name": "BaseBdev4", 00:33:12.853 "uuid": "2e6c8ce5-0fd0-5812-9ec0-8a28f5096409", 00:33:12.853 "is_configured": true, 00:33:12.853 "data_offset": 2048, 00:33:12.853 "data_size": 63488 00:33:12.853 } 00:33:12.853 ] 00:33:12.853 }' 00:33:12.853 18:31:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:33:12.853 18:31:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:33:12.853 18:31:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:33:12.853 18:31:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:33:12.853 18:31:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 77735 00:33:12.853 18:31:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@954 -- # '[' -z 77735 ']' 00:33:12.853 18:31:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@958 -- # kill -0 77735 00:33:12.853 18:31:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@959 -- # uname 00:33:12.853 18:31:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:12.853 18:31:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77735 00:33:12.853 18:31:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:33:12.853 killing process with pid 77735 00:33:12.853 Received shutdown signal, test time was about 60.000000 seconds 00:33:12.853 00:33:12.853 Latency(us) 00:33:12.853 [2024-12-06T18:31:43.802Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:12.853 [2024-12-06T18:31:43.803Z] =================================================================================================================== 00:33:12.854 [2024-12-06T18:31:43.803Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 
00:33:12.854 18:31:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:33:12.854 18:31:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77735' 00:33:12.854 18:31:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@973 -- # kill 77735 00:33:12.854 [2024-12-06 18:31:43.690100] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:33:12.854 18:31:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@978 -- # wait 77735 00:33:12.854 [2024-12-06 18:31:43.690277] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:33:12.854 [2024-12-06 18:31:43.690362] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:33:12.854 [2024-12-06 18:31:43.690376] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:33:13.423 [2024-12-06 18:31:44.208020] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:33:14.802 18:31:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:33:14.802 00:33:14.802 real 0m26.194s 00:33:14.802 user 0m30.618s 00:33:14.802 sys 0m4.727s 00:33:14.802 18:31:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:14.802 18:31:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:33:14.802 ************************************ 00:33:14.802 END TEST raid_rebuild_test_sb 00:33:14.802 ************************************ 00:33:14.802 18:31:45 bdev_raid -- bdev/bdev_raid.sh@980 -- # run_test raid_rebuild_test_io raid_rebuild_test raid1 4 false true true 00:33:14.802 18:31:45 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:33:14.802 18:31:45 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:14.802 18:31:45 bdev_raid -- common/autotest_common.sh@10 -- # set +x 
00:33:14.802 ************************************ 00:33:14.802 START TEST raid_rebuild_test_io 00:33:14.802 ************************************ 00:33:14.802 18:31:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 4 false true true 00:33:14.802 18:31:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:33:14.802 18:31:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:33:14.802 18:31:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:33:14.802 18:31:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:33:14.802 18:31:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:33:14.802 18:31:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:33:14.802 18:31:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:33:14.802 18:31:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:33:14.802 18:31:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:33:14.802 18:31:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:33:14.802 18:31:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:33:14.802 18:31:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:33:14.802 18:31:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:33:14.802 18:31:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:33:14.802 18:31:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:33:14.802 18:31:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:33:14.802 18:31:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo 
BaseBdev4 00:33:14.802 18:31:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:33:14.802 18:31:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:33:14.802 18:31:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:33:14.802 18:31:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:33:14.802 18:31:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:33:14.802 18:31:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:33:14.802 18:31:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:33:14.802 18:31:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:33:14.802 18:31:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:33:14.802 18:31:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:33:14.802 18:31:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:33:14.802 18:31:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:33:14.802 18:31:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@597 -- # raid_pid=78500 00:33:14.802 18:31:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:33:14.802 18:31:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 78500 00:33:14.802 18:31:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@835 -- # '[' -z 78500 ']' 00:33:14.802 18:31:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:14.802 18:31:45 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@840 -- # local max_retries=100 00:33:14.802 18:31:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:14.802 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:14.802 18:31:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:14.802 18:31:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:33:14.802 I/O size of 3145728 is greater than zero copy threshold (65536). 00:33:14.802 Zero copy mechanism will not be used. 00:33:14.802 [2024-12-06 18:31:45.619560] Starting SPDK v25.01-pre git sha1 b6a18b192 / DPDK 24.03.0 initialization... 00:33:14.802 [2024-12-06 18:31:45.619709] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78500 ] 00:33:15.060 [2024-12-06 18:31:45.804333] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:15.060 [2024-12-06 18:31:45.947554] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:15.318 [2024-12-06 18:31:46.197217] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:33:15.318 [2024-12-06 18:31:46.197271] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:33:15.576 18:31:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:15.576 18:31:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # return 0 00:33:15.576 18:31:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:33:15.576 18:31:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 
00:33:15.576 18:31:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:15.577 18:31:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:33:15.577 BaseBdev1_malloc 00:33:15.577 18:31:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:15.577 18:31:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:33:15.577 18:31:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:15.577 18:31:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:33:15.577 [2024-12-06 18:31:46.506501] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:33:15.577 [2024-12-06 18:31:46.506733] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:33:15.577 [2024-12-06 18:31:46.506799] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:33:15.577 [2024-12-06 18:31:46.506898] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:33:15.577 [2024-12-06 18:31:46.509711] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:33:15.577 [2024-12-06 18:31:46.509755] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:33:15.577 BaseBdev1 00:33:15.577 18:31:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:15.577 18:31:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:33:15.577 18:31:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:33:15.577 18:31:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:15.577 18:31:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # 
set +x 00:33:15.836 BaseBdev2_malloc 00:33:15.836 18:31:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:15.836 18:31:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:33:15.836 18:31:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:15.836 18:31:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:33:15.836 [2024-12-06 18:31:46.570757] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:33:15.836 [2024-12-06 18:31:46.570964] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:33:15.836 [2024-12-06 18:31:46.571008] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:33:15.836 [2024-12-06 18:31:46.571024] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:33:15.836 [2024-12-06 18:31:46.573816] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:33:15.836 [2024-12-06 18:31:46.573860] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:33:15.836 BaseBdev2 00:33:15.836 18:31:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:15.836 18:31:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:33:15.836 18:31:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:33:15.836 18:31:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:15.836 18:31:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:33:15.836 BaseBdev3_malloc 00:33:15.836 18:31:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:15.836 18:31:46 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:33:15.836 18:31:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:15.836 18:31:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:33:15.836 [2024-12-06 18:31:46.646803] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:33:15.836 [2024-12-06 18:31:46.647011] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:33:15.836 [2024-12-06 18:31:46.647077] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:33:15.836 [2024-12-06 18:31:46.647198] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:33:15.836 [2024-12-06 18:31:46.650003] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:33:15.836 [2024-12-06 18:31:46.650051] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:33:15.836 BaseBdev3 00:33:15.836 18:31:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:15.836 18:31:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:33:15.836 18:31:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:33:15.836 18:31:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:15.836 18:31:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:33:15.836 BaseBdev4_malloc 00:33:15.836 18:31:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:15.836 18:31:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:33:15.836 18:31:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 
00:33:15.836 18:31:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:33:15.836 [2024-12-06 18:31:46.710920] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:33:15.836 [2024-12-06 18:31:46.711110] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:33:15.836 [2024-12-06 18:31:46.711179] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:33:15.836 [2024-12-06 18:31:46.711274] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:33:15.836 [2024-12-06 18:31:46.713951] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:33:15.836 [2024-12-06 18:31:46.714104] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:33:15.836 BaseBdev4 00:33:15.836 18:31:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:15.836 18:31:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:33:15.836 18:31:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:15.836 18:31:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:33:15.836 spare_malloc 00:33:15.836 18:31:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:15.836 18:31:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:33:15.836 18:31:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:15.836 18:31:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:33:15.836 spare_delay 00:33:15.836 18:31:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:15.836 18:31:46 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:33:15.836 18:31:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:15.836 18:31:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:33:16.096 [2024-12-06 18:31:46.786417] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:33:16.096 [2024-12-06 18:31:46.786481] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:33:16.096 [2024-12-06 18:31:46.786503] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:33:16.096 [2024-12-06 18:31:46.786518] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:33:16.096 [2024-12-06 18:31:46.789196] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:33:16.096 [2024-12-06 18:31:46.789237] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:33:16.096 spare 00:33:16.096 18:31:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:16.096 18:31:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:33:16.096 18:31:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:16.096 18:31:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:33:16.096 [2024-12-06 18:31:46.798449] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:33:16.096 [2024-12-06 18:31:46.800937] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:33:16.096 [2024-12-06 18:31:46.801137] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:33:16.096 [2024-12-06 18:31:46.801265] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: 
bdev BaseBdev4 is claimed 00:33:16.096 [2024-12-06 18:31:46.801460] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:33:16.096 [2024-12-06 18:31:46.801560] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:33:16.096 [2024-12-06 18:31:46.801895] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:33:16.096 [2024-12-06 18:31:46.802224] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:33:16.096 [2024-12-06 18:31:46.802322] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:33:16.096 [2024-12-06 18:31:46.802553] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:33:16.096 18:31:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:16.096 18:31:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:33:16.096 18:31:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:33:16.096 18:31:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:33:16.096 18:31:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:33:16.096 18:31:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:33:16.096 18:31:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:33:16.096 18:31:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:33:16.096 18:31:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:33:16.096 18:31:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:33:16.096 18:31:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # 
local tmp 00:33:16.096 18:31:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:16.096 18:31:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:16.096 18:31:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:16.096 18:31:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:33:16.096 18:31:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:16.096 18:31:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:33:16.096 "name": "raid_bdev1", 00:33:16.096 "uuid": "312d4b0c-cf12-4704-8ed3-f3e815d44829", 00:33:16.096 "strip_size_kb": 0, 00:33:16.096 "state": "online", 00:33:16.096 "raid_level": "raid1", 00:33:16.096 "superblock": false, 00:33:16.096 "num_base_bdevs": 4, 00:33:16.096 "num_base_bdevs_discovered": 4, 00:33:16.096 "num_base_bdevs_operational": 4, 00:33:16.096 "base_bdevs_list": [ 00:33:16.096 { 00:33:16.096 "name": "BaseBdev1", 00:33:16.096 "uuid": "dccbf780-b72a-549f-9662-573d999f73f5", 00:33:16.096 "is_configured": true, 00:33:16.096 "data_offset": 0, 00:33:16.096 "data_size": 65536 00:33:16.096 }, 00:33:16.096 { 00:33:16.096 "name": "BaseBdev2", 00:33:16.096 "uuid": "79b7db87-6779-5fda-952f-a4ad91ef6143", 00:33:16.096 "is_configured": true, 00:33:16.096 "data_offset": 0, 00:33:16.096 "data_size": 65536 00:33:16.096 }, 00:33:16.096 { 00:33:16.096 "name": "BaseBdev3", 00:33:16.096 "uuid": "3657321d-6a33-50cb-a3ec-8580d8d06346", 00:33:16.096 "is_configured": true, 00:33:16.096 "data_offset": 0, 00:33:16.096 "data_size": 65536 00:33:16.096 }, 00:33:16.096 { 00:33:16.096 "name": "BaseBdev4", 00:33:16.096 "uuid": "9828a73c-b312-5c1f-84d1-0ea210c9fa90", 00:33:16.096 "is_configured": true, 00:33:16.096 "data_offset": 0, 00:33:16.096 "data_size": 65536 00:33:16.096 } 00:33:16.096 ] 00:33:16.096 }' 00:33:16.096 
18:31:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:33:16.096 18:31:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:33:16.356 18:31:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:33:16.356 18:31:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:16.356 18:31:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:33:16.356 18:31:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:33:16.356 [2024-12-06 18:31:47.206518] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:33:16.356 18:31:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:16.356 18:31:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:33:16.356 18:31:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:16.356 18:31:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:16.356 18:31:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:33:16.356 18:31:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:33:16.356 18:31:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:16.356 18:31:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:33:16.356 18:31:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:33:16.356 18:31:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:33:16.356 18:31:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:33:16.356 18:31:47 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:16.356 18:31:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:33:16.356 [2024-12-06 18:31:47.297958] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:33:16.356 18:31:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:16.356 18:31:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:33:16.356 18:31:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:33:16.356 18:31:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:33:16.616 18:31:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:33:16.616 18:31:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:33:16.616 18:31:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:33:16.616 18:31:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:33:16.616 18:31:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:33:16.616 18:31:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:33:16.616 18:31:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:33:16.616 18:31:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:16.616 18:31:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:16.616 18:31:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:16.616 18:31:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:33:16.616 18:31:47 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:16.616 18:31:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:33:16.616 "name": "raid_bdev1", 00:33:16.616 "uuid": "312d4b0c-cf12-4704-8ed3-f3e815d44829", 00:33:16.616 "strip_size_kb": 0, 00:33:16.616 "state": "online", 00:33:16.616 "raid_level": "raid1", 00:33:16.616 "superblock": false, 00:33:16.616 "num_base_bdevs": 4, 00:33:16.616 "num_base_bdevs_discovered": 3, 00:33:16.616 "num_base_bdevs_operational": 3, 00:33:16.616 "base_bdevs_list": [ 00:33:16.616 { 00:33:16.616 "name": null, 00:33:16.616 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:16.616 "is_configured": false, 00:33:16.616 "data_offset": 0, 00:33:16.616 "data_size": 65536 00:33:16.616 }, 00:33:16.616 { 00:33:16.616 "name": "BaseBdev2", 00:33:16.616 "uuid": "79b7db87-6779-5fda-952f-a4ad91ef6143", 00:33:16.616 "is_configured": true, 00:33:16.616 "data_offset": 0, 00:33:16.616 "data_size": 65536 00:33:16.616 }, 00:33:16.616 { 00:33:16.616 "name": "BaseBdev3", 00:33:16.616 "uuid": "3657321d-6a33-50cb-a3ec-8580d8d06346", 00:33:16.616 "is_configured": true, 00:33:16.616 "data_offset": 0, 00:33:16.616 "data_size": 65536 00:33:16.616 }, 00:33:16.616 { 00:33:16.616 "name": "BaseBdev4", 00:33:16.616 "uuid": "9828a73c-b312-5c1f-84d1-0ea210c9fa90", 00:33:16.616 "is_configured": true, 00:33:16.616 "data_offset": 0, 00:33:16.616 "data_size": 65536 00:33:16.616 } 00:33:16.616 ] 00:33:16.616 }' 00:33:16.616 18:31:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:33:16.616 18:31:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:33:16.616 [2024-12-06 18:31:47.380080] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:33:16.616 I/O size of 3145728 is greater than zero copy threshold (65536). 00:33:16.616 Zero copy mechanism will not be used. 00:33:16.616 Running I/O for 60 seconds... 
00:33:16.876 18:31:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:33:16.876 18:31:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:16.876 18:31:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:33:16.876 [2024-12-06 18:31:47.715934] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:33:16.876 18:31:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:16.876 18:31:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:33:16.876 [2024-12-06 18:31:47.793631] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000062f0 00:33:16.876 [2024-12-06 18:31:47.796258] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:33:17.135 [2024-12-06 18:31:47.906233] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:33:17.135 [2024-12-06 18:31:47.907049] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:33:17.135 [2024-12-06 18:31:48.029071] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:33:17.135 [2024-12-06 18:31:48.030326] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:33:17.703 [2024-12-06 18:31:48.361541] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:33:17.703 [2024-12-06 18:31:48.362608] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:33:17.703 111.00 IOPS, 333.00 MiB/s [2024-12-06T18:31:48.652Z] [2024-12-06 18:31:48.493294] bdev_raid.c: 
859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:33:17.703 [2024-12-06 18:31:48.493937] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:33:17.963 18:31:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:33:17.963 18:31:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:33:17.963 18:31:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:33:17.963 18:31:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:33:17.963 18:31:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:33:17.963 18:31:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:17.963 18:31:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:17.963 18:31:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:17.963 18:31:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:33:17.963 18:31:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:17.963 [2024-12-06 18:31:48.823107] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:33:17.963 [2024-12-06 18:31:48.823620] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:33:17.963 18:31:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:33:17.963 "name": "raid_bdev1", 00:33:17.963 "uuid": "312d4b0c-cf12-4704-8ed3-f3e815d44829", 00:33:17.963 "strip_size_kb": 0, 00:33:17.963 "state": "online", 00:33:17.963 
"raid_level": "raid1", 00:33:17.963 "superblock": false, 00:33:17.963 "num_base_bdevs": 4, 00:33:17.963 "num_base_bdevs_discovered": 4, 00:33:17.963 "num_base_bdevs_operational": 4, 00:33:17.963 "process": { 00:33:17.963 "type": "rebuild", 00:33:17.963 "target": "spare", 00:33:17.963 "progress": { 00:33:17.963 "blocks": 12288, 00:33:17.963 "percent": 18 00:33:17.963 } 00:33:17.963 }, 00:33:17.963 "base_bdevs_list": [ 00:33:17.963 { 00:33:17.963 "name": "spare", 00:33:17.963 "uuid": "c542617d-2484-54fc-b564-8a58835e039a", 00:33:17.963 "is_configured": true, 00:33:17.963 "data_offset": 0, 00:33:17.963 "data_size": 65536 00:33:17.963 }, 00:33:17.963 { 00:33:17.963 "name": "BaseBdev2", 00:33:17.963 "uuid": "79b7db87-6779-5fda-952f-a4ad91ef6143", 00:33:17.963 "is_configured": true, 00:33:17.963 "data_offset": 0, 00:33:17.963 "data_size": 65536 00:33:17.963 }, 00:33:17.963 { 00:33:17.963 "name": "BaseBdev3", 00:33:17.963 "uuid": "3657321d-6a33-50cb-a3ec-8580d8d06346", 00:33:17.963 "is_configured": true, 00:33:17.963 "data_offset": 0, 00:33:17.963 "data_size": 65536 00:33:17.963 }, 00:33:17.963 { 00:33:17.963 "name": "BaseBdev4", 00:33:17.963 "uuid": "9828a73c-b312-5c1f-84d1-0ea210c9fa90", 00:33:17.963 "is_configured": true, 00:33:17.963 "data_offset": 0, 00:33:17.963 "data_size": 65536 00:33:17.963 } 00:33:17.963 ] 00:33:17.963 }' 00:33:17.963 18:31:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:33:17.963 18:31:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:33:17.963 18:31:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:33:17.963 18:31:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:33:17.963 18:31:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:33:17.963 18:31:48 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:33:17.963 18:31:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:33:17.963 [2024-12-06 18:31:48.895945] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:33:18.223 [2024-12-06 18:31:49.045383] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:33:18.223 [2024-12-06 18:31:49.064533] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:33:18.223 [2024-12-06 18:31:49.064726] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:33:18.223 [2024-12-06 18:31:49.064762] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:33:18.223 [2024-12-06 18:31:49.096290] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006220 00:33:18.223 18:31:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:18.223 18:31:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:33:18.223 18:31:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:33:18.223 18:31:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:33:18.223 18:31:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:33:18.223 18:31:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:33:18.223 18:31:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:33:18.223 18:31:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:33:18.223 18:31:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:33:18.223 18:31:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:33:18.223 18:31:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:33:18.223 18:31:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:18.223 18:31:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:18.223 18:31:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:33:18.223 18:31:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:18.223 18:31:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:18.223 18:31:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:33:18.223 "name": "raid_bdev1", 00:33:18.223 "uuid": "312d4b0c-cf12-4704-8ed3-f3e815d44829", 00:33:18.223 "strip_size_kb": 0, 00:33:18.223 "state": "online", 00:33:18.223 "raid_level": "raid1", 00:33:18.223 "superblock": false, 00:33:18.223 "num_base_bdevs": 4, 00:33:18.223 "num_base_bdevs_discovered": 3, 00:33:18.223 "num_base_bdevs_operational": 3, 00:33:18.223 "base_bdevs_list": [ 00:33:18.223 { 00:33:18.223 "name": null, 00:33:18.223 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:18.223 "is_configured": false, 00:33:18.223 "data_offset": 0, 00:33:18.223 "data_size": 65536 00:33:18.223 }, 00:33:18.223 { 00:33:18.223 "name": "BaseBdev2", 00:33:18.223 "uuid": "79b7db87-6779-5fda-952f-a4ad91ef6143", 00:33:18.223 "is_configured": true, 00:33:18.223 "data_offset": 0, 00:33:18.223 "data_size": 65536 00:33:18.223 }, 00:33:18.223 { 00:33:18.223 "name": "BaseBdev3", 00:33:18.223 "uuid": "3657321d-6a33-50cb-a3ec-8580d8d06346", 00:33:18.223 "is_configured": true, 00:33:18.223 "data_offset": 0, 00:33:18.223 "data_size": 65536 00:33:18.223 }, 00:33:18.223 { 00:33:18.223 "name": "BaseBdev4", 00:33:18.223 "uuid": "9828a73c-b312-5c1f-84d1-0ea210c9fa90", 00:33:18.223 "is_configured": true, 00:33:18.223 
"data_offset": 0, 00:33:18.223 "data_size": 65536 00:33:18.223 } 00:33:18.223 ] 00:33:18.223 }' 00:33:18.223 18:31:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:33:18.223 18:31:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:33:18.741 113.50 IOPS, 340.50 MiB/s [2024-12-06T18:31:49.690Z] 18:31:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:33:18.741 18:31:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:33:18.741 18:31:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:33:18.741 18:31:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:33:18.741 18:31:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:33:18.741 18:31:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:18.741 18:31:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:18.741 18:31:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:18.741 18:31:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:33:18.741 18:31:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:18.741 18:31:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:33:18.741 "name": "raid_bdev1", 00:33:18.741 "uuid": "312d4b0c-cf12-4704-8ed3-f3e815d44829", 00:33:18.741 "strip_size_kb": 0, 00:33:18.741 "state": "online", 00:33:18.741 "raid_level": "raid1", 00:33:18.741 "superblock": false, 00:33:18.741 "num_base_bdevs": 4, 00:33:18.741 "num_base_bdevs_discovered": 3, 00:33:18.741 "num_base_bdevs_operational": 3, 00:33:18.741 "base_bdevs_list": [ 00:33:18.741 { 00:33:18.741 "name": null, 00:33:18.741 
"uuid": "00000000-0000-0000-0000-000000000000", 00:33:18.741 "is_configured": false, 00:33:18.741 "data_offset": 0, 00:33:18.741 "data_size": 65536 00:33:18.741 }, 00:33:18.741 { 00:33:18.741 "name": "BaseBdev2", 00:33:18.741 "uuid": "79b7db87-6779-5fda-952f-a4ad91ef6143", 00:33:18.741 "is_configured": true, 00:33:18.741 "data_offset": 0, 00:33:18.741 "data_size": 65536 00:33:18.741 }, 00:33:18.741 { 00:33:18.741 "name": "BaseBdev3", 00:33:18.741 "uuid": "3657321d-6a33-50cb-a3ec-8580d8d06346", 00:33:18.741 "is_configured": true, 00:33:18.741 "data_offset": 0, 00:33:18.741 "data_size": 65536 00:33:18.741 }, 00:33:18.741 { 00:33:18.741 "name": "BaseBdev4", 00:33:18.742 "uuid": "9828a73c-b312-5c1f-84d1-0ea210c9fa90", 00:33:18.742 "is_configured": true, 00:33:18.742 "data_offset": 0, 00:33:18.742 "data_size": 65536 00:33:18.742 } 00:33:18.742 ] 00:33:18.742 }' 00:33:18.742 18:31:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:33:18.742 18:31:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:33:18.742 18:31:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:33:18.742 18:31:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:33:18.742 18:31:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:33:18.742 18:31:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:18.742 18:31:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:33:18.742 [2024-12-06 18:31:49.619419] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:33:18.742 18:31:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:18.742 18:31:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:33:18.742 [2024-12-06 
18:31:49.681876] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:33:18.742 [2024-12-06 18:31:49.684435] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:33:19.001 [2024-12-06 18:31:49.819504] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:33:19.001 [2024-12-06 18:31:49.821704] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:33:19.261 [2024-12-06 18:31:50.057417] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:33:19.261 [2024-12-06 18:31:50.057927] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:33:19.520 [2024-12-06 18:31:50.281505] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:33:19.520 [2024-12-06 18:31:50.282298] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:33:19.778 122.00 IOPS, 366.00 MiB/s [2024-12-06T18:31:50.727Z] [2024-12-06 18:31:50.540847] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:33:19.778 18:31:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:33:19.778 18:31:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:33:19.778 18:31:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:33:19.778 18:31:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:33:19.779 18:31:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:33:19.779 18:31:50 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:19.779 18:31:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:19.779 18:31:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:19.779 18:31:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:33:19.779 18:31:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:19.779 18:31:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:33:19.779 "name": "raid_bdev1", 00:33:19.779 "uuid": "312d4b0c-cf12-4704-8ed3-f3e815d44829", 00:33:19.779 "strip_size_kb": 0, 00:33:19.779 "state": "online", 00:33:19.779 "raid_level": "raid1", 00:33:19.779 "superblock": false, 00:33:19.779 "num_base_bdevs": 4, 00:33:19.779 "num_base_bdevs_discovered": 4, 00:33:19.779 "num_base_bdevs_operational": 4, 00:33:19.779 "process": { 00:33:19.779 "type": "rebuild", 00:33:19.779 "target": "spare", 00:33:19.779 "progress": { 00:33:19.779 "blocks": 10240, 00:33:19.779 "percent": 15 00:33:19.779 } 00:33:19.779 }, 00:33:19.779 "base_bdevs_list": [ 00:33:19.779 { 00:33:19.779 "name": "spare", 00:33:19.779 "uuid": "c542617d-2484-54fc-b564-8a58835e039a", 00:33:19.779 "is_configured": true, 00:33:19.779 "data_offset": 0, 00:33:19.779 "data_size": 65536 00:33:19.779 }, 00:33:19.779 { 00:33:19.779 "name": "BaseBdev2", 00:33:19.779 "uuid": "79b7db87-6779-5fda-952f-a4ad91ef6143", 00:33:19.779 "is_configured": true, 00:33:19.779 "data_offset": 0, 00:33:19.779 "data_size": 65536 00:33:19.779 }, 00:33:19.779 { 00:33:19.779 "name": "BaseBdev3", 00:33:19.779 "uuid": "3657321d-6a33-50cb-a3ec-8580d8d06346", 00:33:19.779 "is_configured": true, 00:33:19.779 "data_offset": 0, 00:33:19.779 "data_size": 65536 00:33:19.779 }, 00:33:19.779 { 00:33:19.779 "name": "BaseBdev4", 00:33:19.779 "uuid": 
"9828a73c-b312-5c1f-84d1-0ea210c9fa90", 00:33:19.779 "is_configured": true, 00:33:19.779 "data_offset": 0, 00:33:19.779 "data_size": 65536 00:33:19.779 } 00:33:19.779 ] 00:33:19.779 }' 00:33:19.779 18:31:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:33:20.037 18:31:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:33:20.037 18:31:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:33:20.037 18:31:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:33:20.037 18:31:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:33:20.037 18:31:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:33:20.037 18:31:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:33:20.037 18:31:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:33:20.037 18:31:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:33:20.037 18:31:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:20.037 18:31:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:33:20.037 [2024-12-06 18:31:50.816408] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:33:20.297 [2024-12-06 18:31:51.053743] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000006220 00:33:20.297 [2024-12-06 18:31:51.054046] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d0000063c0 00:33:20.297 [2024-12-06 18:31:51.059051] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:33:20.297 18:31:51 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:20.297 18:31:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:33:20.297 18:31:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:33:20.297 18:31:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:33:20.297 18:31:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:33:20.297 18:31:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:33:20.297 18:31:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:33:20.297 18:31:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:33:20.297 18:31:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:20.297 18:31:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:20.297 18:31:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:20.297 18:31:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:33:20.297 18:31:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:20.297 18:31:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:33:20.297 "name": "raid_bdev1", 00:33:20.297 "uuid": "312d4b0c-cf12-4704-8ed3-f3e815d44829", 00:33:20.297 "strip_size_kb": 0, 00:33:20.297 "state": "online", 00:33:20.297 "raid_level": "raid1", 00:33:20.297 "superblock": false, 00:33:20.297 "num_base_bdevs": 4, 00:33:20.297 "num_base_bdevs_discovered": 3, 00:33:20.297 "num_base_bdevs_operational": 3, 00:33:20.297 "process": { 00:33:20.297 "type": "rebuild", 00:33:20.297 "target": "spare", 00:33:20.297 "progress": { 00:33:20.297 "blocks": 14336, 00:33:20.297 
"percent": 21 00:33:20.297 } 00:33:20.297 }, 00:33:20.297 "base_bdevs_list": [ 00:33:20.297 { 00:33:20.297 "name": "spare", 00:33:20.297 "uuid": "c542617d-2484-54fc-b564-8a58835e039a", 00:33:20.297 "is_configured": true, 00:33:20.297 "data_offset": 0, 00:33:20.297 "data_size": 65536 00:33:20.297 }, 00:33:20.297 { 00:33:20.297 "name": null, 00:33:20.297 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:20.297 "is_configured": false, 00:33:20.297 "data_offset": 0, 00:33:20.297 "data_size": 65536 00:33:20.297 }, 00:33:20.297 { 00:33:20.297 "name": "BaseBdev3", 00:33:20.297 "uuid": "3657321d-6a33-50cb-a3ec-8580d8d06346", 00:33:20.297 "is_configured": true, 00:33:20.297 "data_offset": 0, 00:33:20.297 "data_size": 65536 00:33:20.297 }, 00:33:20.297 { 00:33:20.297 "name": "BaseBdev4", 00:33:20.297 "uuid": "9828a73c-b312-5c1f-84d1-0ea210c9fa90", 00:33:20.297 "is_configured": true, 00:33:20.297 "data_offset": 0, 00:33:20.297 "data_size": 65536 00:33:20.297 } 00:33:20.297 ] 00:33:20.297 }' 00:33:20.297 18:31:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:33:20.297 18:31:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:33:20.297 18:31:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:33:20.297 [2024-12-06 18:31:51.175491] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:33:20.297 18:31:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:33:20.297 18:31:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # local timeout=488 00:33:20.297 18:31:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:33:20.297 18:31:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:33:20.297 18:31:51 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:33:20.297 18:31:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:33:20.297 18:31:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:33:20.297 18:31:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:33:20.297 18:31:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:20.297 18:31:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:20.297 18:31:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:20.297 18:31:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:33:20.297 18:31:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:20.297 18:31:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:33:20.297 "name": "raid_bdev1", 00:33:20.297 "uuid": "312d4b0c-cf12-4704-8ed3-f3e815d44829", 00:33:20.297 "strip_size_kb": 0, 00:33:20.297 "state": "online", 00:33:20.297 "raid_level": "raid1", 00:33:20.297 "superblock": false, 00:33:20.297 "num_base_bdevs": 4, 00:33:20.297 "num_base_bdevs_discovered": 3, 00:33:20.297 "num_base_bdevs_operational": 3, 00:33:20.297 "process": { 00:33:20.297 "type": "rebuild", 00:33:20.297 "target": "spare", 00:33:20.297 "progress": { 00:33:20.297 "blocks": 16384, 00:33:20.297 "percent": 25 00:33:20.297 } 00:33:20.297 }, 00:33:20.297 "base_bdevs_list": [ 00:33:20.297 { 00:33:20.297 "name": "spare", 00:33:20.297 "uuid": "c542617d-2484-54fc-b564-8a58835e039a", 00:33:20.297 "is_configured": true, 00:33:20.297 "data_offset": 0, 00:33:20.297 "data_size": 65536 00:33:20.297 }, 00:33:20.297 { 00:33:20.297 "name": null, 00:33:20.297 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:20.297 
"is_configured": false, 00:33:20.297 "data_offset": 0, 00:33:20.297 "data_size": 65536 00:33:20.297 }, 00:33:20.297 { 00:33:20.297 "name": "BaseBdev3", 00:33:20.297 "uuid": "3657321d-6a33-50cb-a3ec-8580d8d06346", 00:33:20.297 "is_configured": true, 00:33:20.297 "data_offset": 0, 00:33:20.297 "data_size": 65536 00:33:20.297 }, 00:33:20.297 { 00:33:20.297 "name": "BaseBdev4", 00:33:20.297 "uuid": "9828a73c-b312-5c1f-84d1-0ea210c9fa90", 00:33:20.297 "is_configured": true, 00:33:20.297 "data_offset": 0, 00:33:20.297 "data_size": 65536 00:33:20.297 } 00:33:20.297 ] 00:33:20.297 }' 00:33:20.297 18:31:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:33:20.560 18:31:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:33:20.560 18:31:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:33:20.560 18:31:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:33:20.560 18:31:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:33:21.188 111.25 IOPS, 333.75 MiB/s [2024-12-06T18:31:52.137Z] [2024-12-06 18:31:51.902499] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:33:21.188 [2024-12-06 18:31:52.025022] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:33:21.446 18:31:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:33:21.446 18:31:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:33:21.446 18:31:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:33:21.446 18:31:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:33:21.446 18:31:52 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:33:21.446 18:31:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:33:21.446 18:31:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:21.446 18:31:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:21.446 18:31:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:21.446 18:31:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:33:21.446 18:31:52 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:21.446 18:31:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:33:21.446 "name": "raid_bdev1", 00:33:21.446 "uuid": "312d4b0c-cf12-4704-8ed3-f3e815d44829", 00:33:21.446 "strip_size_kb": 0, 00:33:21.446 "state": "online", 00:33:21.446 "raid_level": "raid1", 00:33:21.446 "superblock": false, 00:33:21.446 "num_base_bdevs": 4, 00:33:21.446 "num_base_bdevs_discovered": 3, 00:33:21.446 "num_base_bdevs_operational": 3, 00:33:21.446 "process": { 00:33:21.446 "type": "rebuild", 00:33:21.446 "target": "spare", 00:33:21.446 "progress": { 00:33:21.446 "blocks": 30720, 00:33:21.446 "percent": 46 00:33:21.446 } 00:33:21.446 }, 00:33:21.446 "base_bdevs_list": [ 00:33:21.446 { 00:33:21.446 "name": "spare", 00:33:21.446 "uuid": "c542617d-2484-54fc-b564-8a58835e039a", 00:33:21.446 "is_configured": true, 00:33:21.446 "data_offset": 0, 00:33:21.446 "data_size": 65536 00:33:21.446 }, 00:33:21.446 { 00:33:21.446 "name": null, 00:33:21.446 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:21.446 "is_configured": false, 00:33:21.446 "data_offset": 0, 00:33:21.446 "data_size": 65536 00:33:21.446 }, 00:33:21.446 { 00:33:21.446 "name": "BaseBdev3", 00:33:21.446 "uuid": "3657321d-6a33-50cb-a3ec-8580d8d06346", 00:33:21.446 
"is_configured": true, 00:33:21.446 "data_offset": 0, 00:33:21.446 "data_size": 65536 00:33:21.446 }, 00:33:21.446 { 00:33:21.446 "name": "BaseBdev4", 00:33:21.446 "uuid": "9828a73c-b312-5c1f-84d1-0ea210c9fa90", 00:33:21.446 "is_configured": true, 00:33:21.446 "data_offset": 0, 00:33:21.446 "data_size": 65536 00:33:21.446 } 00:33:21.446 ] 00:33:21.446 }' 00:33:21.446 18:31:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:33:21.446 [2024-12-06 18:31:52.391387] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:33:21.704 99.20 IOPS, 297.60 MiB/s [2024-12-06T18:31:52.653Z] 18:31:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:33:21.704 18:31:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:33:21.704 18:31:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:33:21.704 18:31:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:33:21.704 [2024-12-06 18:31:52.618693] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:33:22.270 [2024-12-06 18:31:52.954691] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:33:22.270 [2024-12-06 18:31:52.956124] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:33:22.529 90.50 IOPS, 271.50 MiB/s [2024-12-06T18:31:53.478Z] 18:31:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:33:22.529 18:31:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:33:22.529 18:31:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local 
raid_bdev_name=raid_bdev1 00:33:22.529 18:31:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:33:22.529 18:31:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:33:22.529 18:31:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:33:22.529 18:31:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:22.529 18:31:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:22.529 18:31:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:22.529 18:31:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:33:22.789 18:31:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:22.789 18:31:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:33:22.789 "name": "raid_bdev1", 00:33:22.789 "uuid": "312d4b0c-cf12-4704-8ed3-f3e815d44829", 00:33:22.789 "strip_size_kb": 0, 00:33:22.789 "state": "online", 00:33:22.789 "raid_level": "raid1", 00:33:22.789 "superblock": false, 00:33:22.789 "num_base_bdevs": 4, 00:33:22.789 "num_base_bdevs_discovered": 3, 00:33:22.789 "num_base_bdevs_operational": 3, 00:33:22.789 "process": { 00:33:22.789 "type": "rebuild", 00:33:22.789 "target": "spare", 00:33:22.789 "progress": { 00:33:22.789 "blocks": 45056, 00:33:22.789 "percent": 68 00:33:22.789 } 00:33:22.789 }, 00:33:22.789 "base_bdevs_list": [ 00:33:22.789 { 00:33:22.789 "name": "spare", 00:33:22.789 "uuid": "c542617d-2484-54fc-b564-8a58835e039a", 00:33:22.789 "is_configured": true, 00:33:22.789 "data_offset": 0, 00:33:22.789 "data_size": 65536 00:33:22.789 }, 00:33:22.789 { 00:33:22.789 "name": null, 00:33:22.789 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:22.789 "is_configured": false, 00:33:22.789 "data_offset": 0, 00:33:22.789 
"data_size": 65536 00:33:22.789 }, 00:33:22.789 { 00:33:22.789 "name": "BaseBdev3", 00:33:22.789 "uuid": "3657321d-6a33-50cb-a3ec-8580d8d06346", 00:33:22.789 "is_configured": true, 00:33:22.789 "data_offset": 0, 00:33:22.789 "data_size": 65536 00:33:22.789 }, 00:33:22.789 { 00:33:22.789 "name": "BaseBdev4", 00:33:22.789 "uuid": "9828a73c-b312-5c1f-84d1-0ea210c9fa90", 00:33:22.789 "is_configured": true, 00:33:22.789 "data_offset": 0, 00:33:22.789 "data_size": 65536 00:33:22.789 } 00:33:22.789 ] 00:33:22.789 }' 00:33:22.789 18:31:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:33:22.789 18:31:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:33:22.789 18:31:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:33:22.789 18:31:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:33:22.789 18:31:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:33:23.725 81.29 IOPS, 243.86 MiB/s [2024-12-06T18:31:54.674Z] [2024-12-06 18:31:54.476242] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:33:23.725 [2024-12-06 18:31:54.576059] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:33:23.725 [2024-12-06 18:31:54.579923] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:33:23.725 18:31:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:33:23.725 18:31:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:33:23.725 18:31:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:33:23.725 18:31:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:33:23.725 18:31:54 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:33:23.725 18:31:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:33:23.725 18:31:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:23.725 18:31:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:23.725 18:31:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:23.725 18:31:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:33:23.725 18:31:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:23.725 18:31:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:33:23.725 "name": "raid_bdev1", 00:33:23.725 "uuid": "312d4b0c-cf12-4704-8ed3-f3e815d44829", 00:33:23.725 "strip_size_kb": 0, 00:33:23.725 "state": "online", 00:33:23.725 "raid_level": "raid1", 00:33:23.725 "superblock": false, 00:33:23.725 "num_base_bdevs": 4, 00:33:23.725 "num_base_bdevs_discovered": 3, 00:33:23.725 "num_base_bdevs_operational": 3, 00:33:23.725 "base_bdevs_list": [ 00:33:23.725 { 00:33:23.725 "name": "spare", 00:33:23.725 "uuid": "c542617d-2484-54fc-b564-8a58835e039a", 00:33:23.725 "is_configured": true, 00:33:23.725 "data_offset": 0, 00:33:23.725 "data_size": 65536 00:33:23.725 }, 00:33:23.725 { 00:33:23.725 "name": null, 00:33:23.725 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:23.725 "is_configured": false, 00:33:23.725 "data_offset": 0, 00:33:23.725 "data_size": 65536 00:33:23.725 }, 00:33:23.725 { 00:33:23.725 "name": "BaseBdev3", 00:33:23.725 "uuid": "3657321d-6a33-50cb-a3ec-8580d8d06346", 00:33:23.725 "is_configured": true, 00:33:23.725 "data_offset": 0, 00:33:23.725 "data_size": 65536 00:33:23.725 }, 00:33:23.725 { 00:33:23.725 "name": "BaseBdev4", 00:33:23.725 "uuid": "9828a73c-b312-5c1f-84d1-0ea210c9fa90", 
00:33:23.725 "is_configured": true, 00:33:23.725 "data_offset": 0, 00:33:23.725 "data_size": 65536 00:33:23.725 } 00:33:23.725 ] 00:33:23.725 }' 00:33:23.725 18:31:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:33:23.984 18:31:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:33:23.984 18:31:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:33:23.984 18:31:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:33:23.984 18:31:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@709 -- # break 00:33:23.984 18:31:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:33:23.984 18:31:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:33:23.984 18:31:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:33:23.984 18:31:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:33:23.984 18:31:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:33:23.984 18:31:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:23.984 18:31:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:23.984 18:31:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:23.984 18:31:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:33:23.984 18:31:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:23.984 18:31:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:33:23.984 "name": "raid_bdev1", 00:33:23.984 "uuid": "312d4b0c-cf12-4704-8ed3-f3e815d44829", 
00:33:23.984 "strip_size_kb": 0, 00:33:23.984 "state": "online", 00:33:23.984 "raid_level": "raid1", 00:33:23.984 "superblock": false, 00:33:23.984 "num_base_bdevs": 4, 00:33:23.984 "num_base_bdevs_discovered": 3, 00:33:23.984 "num_base_bdevs_operational": 3, 00:33:23.984 "base_bdevs_list": [ 00:33:23.984 { 00:33:23.984 "name": "spare", 00:33:23.984 "uuid": "c542617d-2484-54fc-b564-8a58835e039a", 00:33:23.984 "is_configured": true, 00:33:23.984 "data_offset": 0, 00:33:23.984 "data_size": 65536 00:33:23.984 }, 00:33:23.984 { 00:33:23.984 "name": null, 00:33:23.984 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:23.984 "is_configured": false, 00:33:23.984 "data_offset": 0, 00:33:23.984 "data_size": 65536 00:33:23.984 }, 00:33:23.984 { 00:33:23.984 "name": "BaseBdev3", 00:33:23.984 "uuid": "3657321d-6a33-50cb-a3ec-8580d8d06346", 00:33:23.984 "is_configured": true, 00:33:23.984 "data_offset": 0, 00:33:23.984 "data_size": 65536 00:33:23.984 }, 00:33:23.984 { 00:33:23.984 "name": "BaseBdev4", 00:33:23.984 "uuid": "9828a73c-b312-5c1f-84d1-0ea210c9fa90", 00:33:23.985 "is_configured": true, 00:33:23.985 "data_offset": 0, 00:33:23.985 "data_size": 65536 00:33:23.985 } 00:33:23.985 ] 00:33:23.985 }' 00:33:23.985 18:31:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:33:23.985 18:31:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:33:23.985 18:31:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:33:23.985 18:31:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:33:23.985 18:31:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:33:23.985 18:31:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:33:23.985 18:31:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:33:23.985 18:31:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:33:23.985 18:31:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:33:23.985 18:31:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:33:23.985 18:31:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:33:23.985 18:31:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:33:23.985 18:31:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:33:23.985 18:31:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:33:23.985 18:31:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:23.985 18:31:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:23.985 18:31:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:23.985 18:31:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:33:23.985 18:31:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:23.985 18:31:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:33:23.985 "name": "raid_bdev1", 00:33:23.985 "uuid": "312d4b0c-cf12-4704-8ed3-f3e815d44829", 00:33:23.985 "strip_size_kb": 0, 00:33:23.985 "state": "online", 00:33:23.985 "raid_level": "raid1", 00:33:23.985 "superblock": false, 00:33:23.985 "num_base_bdevs": 4, 00:33:23.985 "num_base_bdevs_discovered": 3, 00:33:23.985 "num_base_bdevs_operational": 3, 00:33:23.985 "base_bdevs_list": [ 00:33:23.985 { 00:33:23.985 "name": "spare", 00:33:23.985 "uuid": "c542617d-2484-54fc-b564-8a58835e039a", 00:33:23.985 "is_configured": true, 00:33:23.985 "data_offset": 0, 00:33:23.985 
"data_size": 65536 00:33:23.985 }, 00:33:23.985 { 00:33:23.985 "name": null, 00:33:23.985 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:23.985 "is_configured": false, 00:33:23.985 "data_offset": 0, 00:33:23.985 "data_size": 65536 00:33:23.985 }, 00:33:23.985 { 00:33:23.985 "name": "BaseBdev3", 00:33:23.985 "uuid": "3657321d-6a33-50cb-a3ec-8580d8d06346", 00:33:23.985 "is_configured": true, 00:33:23.985 "data_offset": 0, 00:33:23.985 "data_size": 65536 00:33:23.985 }, 00:33:23.985 { 00:33:23.985 "name": "BaseBdev4", 00:33:23.985 "uuid": "9828a73c-b312-5c1f-84d1-0ea210c9fa90", 00:33:23.985 "is_configured": true, 00:33:23.985 "data_offset": 0, 00:33:23.985 "data_size": 65536 00:33:23.985 } 00:33:23.985 ] 00:33:23.985 }' 00:33:23.985 18:31:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:33:23.985 18:31:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:33:24.552 18:31:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:33:24.552 18:31:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:24.552 18:31:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:33:24.552 [2024-12-06 18:31:55.243344] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:33:24.552 [2024-12-06 18:31:55.243381] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:33:24.552 00:33:24.552 Latency(us) 00:33:24.552 [2024-12-06T18:31:55.501Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:24.552 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:33:24.552 raid_bdev1 : 7.97 75.83 227.49 0.00 0.00 18652.40 342.16 119596.62 00:33:24.552 [2024-12-06T18:31:55.501Z] =================================================================================================================== 
00:33:24.552 [2024-12-06T18:31:55.501Z] Total : 75.83 227.49 0.00 0.00 18652.40 342.16 119596.62 00:33:24.552 [2024-12-06 18:31:55.356846] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:33:24.552 [2024-12-06 18:31:55.356925] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:33:24.552 [2024-12-06 18:31:55.357030] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:33:24.552 [2024-12-06 18:31:55.357043] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:33:24.552 { 00:33:24.552 "results": [ 00:33:24.552 { 00:33:24.552 "job": "raid_bdev1", 00:33:24.552 "core_mask": "0x1", 00:33:24.552 "workload": "randrw", 00:33:24.552 "percentage": 50, 00:33:24.552 "status": "finished", 00:33:24.552 "queue_depth": 2, 00:33:24.552 "io_size": 3145728, 00:33:24.552 "runtime": 7.965255, 00:33:24.552 "iops": 75.82933628615783, 00:33:24.552 "mibps": 227.48800885847348, 00:33:24.552 "io_failed": 0, 00:33:24.552 "io_timeout": 0, 00:33:24.552 "avg_latency_us": 18652.4041596851, 00:33:24.552 "min_latency_us": 342.1558232931727, 00:33:24.552 "max_latency_us": 119596.62008032129 00:33:24.552 } 00:33:24.552 ], 00:33:24.552 "core_count": 1 00:33:24.552 } 00:33:24.552 18:31:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:24.552 18:31:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:24.552 18:31:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # jq length 00:33:24.552 18:31:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:24.552 18:31:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:33:24.552 18:31:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:24.552 18:31:55 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:33:24.552 18:31:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:33:24.552 18:31:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:33:24.552 18:31:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:33:24.552 18:31:55 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:33:24.552 18:31:55 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:33:24.552 18:31:55 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:33:24.552 18:31:55 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:33:24.552 18:31:55 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:33:24.552 18:31:55 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:33:24.552 18:31:55 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:33:24.552 18:31:55 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:33:24.552 18:31:55 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:33:24.811 /dev/nbd0 00:33:24.811 18:31:55 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:33:24.811 18:31:55 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:33:24.811 18:31:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:33:24.811 18:31:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # local i 00:33:24.811 18:31:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:33:24.811 18:31:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i 
<= 20 )) 00:33:24.811 18:31:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:33:24.811 18:31:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@877 -- # break 00:33:24.811 18:31:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:33:24.811 18:31:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:33:24.811 18:31:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:33:24.811 1+0 records in 00:33:24.811 1+0 records out 00:33:24.811 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000355222 s, 11.5 MB/s 00:33:24.811 18:31:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:33:24.811 18:31:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # size=4096 00:33:24.811 18:31:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:33:24.811 18:31:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:33:24.811 18:31:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@893 -- # return 0 00:33:24.811 18:31:55 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:33:24.811 18:31:55 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:33:24.811 18:31:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:33:24.811 18:31:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z '' ']' 00:33:24.811 18:31:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@728 -- # continue 00:33:24.811 18:31:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:33:24.811 18:31:55 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev3 ']' 00:33:24.811 18:31:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev3 /dev/nbd1 00:33:24.811 18:31:55 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:33:24.811 18:31:55 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev3') 00:33:24.811 18:31:55 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:33:24.811 18:31:55 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:33:24.811 18:31:55 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:33:24.811 18:31:55 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:33:24.811 18:31:55 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:33:24.811 18:31:55 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:33:24.811 18:31:55 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev3 /dev/nbd1 00:33:25.070 /dev/nbd1 00:33:25.070 18:31:55 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:33:25.070 18:31:55 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:33:25.070 18:31:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:33:25.070 18:31:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # local i 00:33:25.070 18:31:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:33:25.070 18:31:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:33:25.070 18:31:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:33:25.070 
18:31:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@877 -- # break 00:33:25.070 18:31:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:33:25.070 18:31:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:33:25.070 18:31:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:33:25.070 1+0 records in 00:33:25.070 1+0 records out 00:33:25.070 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000505221 s, 8.1 MB/s 00:33:25.070 18:31:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:33:25.070 18:31:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # size=4096 00:33:25.070 18:31:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:33:25.070 18:31:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:33:25.070 18:31:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@893 -- # return 0 00:33:25.070 18:31:55 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:33:25.070 18:31:55 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:33:25.070 18:31:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:33:25.328 18:31:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:33:25.328 18:31:56 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:33:25.328 18:31:56 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:33:25.328 18:31:56 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:33:25.328 18:31:56 
bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:33:25.328 18:31:56 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:33:25.328 18:31:56 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:33:25.586 18:31:56 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:33:25.586 18:31:56 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:33:25.586 18:31:56 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:33:25.586 18:31:56 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:33:25.586 18:31:56 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:33:25.586 18:31:56 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:33:25.586 18:31:56 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:33:25.586 18:31:56 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:33:25.586 18:31:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:33:25.586 18:31:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev4 ']' 00:33:25.586 18:31:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev4 /dev/nbd1 00:33:25.586 18:31:56 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:33:25.586 18:31:56 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev4') 00:33:25.586 18:31:56 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:33:25.586 18:31:56 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:33:25.586 18:31:56 bdev_raid.raid_rebuild_test_io -- 
bdev/nbd_common.sh@11 -- # local nbd_list 00:33:25.586 18:31:56 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:33:25.586 18:31:56 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:33:25.587 18:31:56 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:33:25.587 18:31:56 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev4 /dev/nbd1 00:33:25.845 /dev/nbd1 00:33:25.845 18:31:56 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:33:25.845 18:31:56 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:33:25.845 18:31:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:33:25.845 18:31:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # local i 00:33:25.845 18:31:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:33:25.845 18:31:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:33:25.845 18:31:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:33:25.845 18:31:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@877 -- # break 00:33:25.845 18:31:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:33:25.845 18:31:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:33:25.845 18:31:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:33:25.845 1+0 records in 00:33:25.845 1+0 records out 00:33:25.845 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00041831 s, 9.8 MB/s 00:33:25.845 18:31:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # stat -c %s 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:33:25.845 18:31:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # size=4096 00:33:25.845 18:31:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:33:25.845 18:31:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:33:25.845 18:31:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@893 -- # return 0 00:33:25.845 18:31:56 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:33:25.845 18:31:56 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:33:25.845 18:31:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:33:25.845 18:31:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:33:25.845 18:31:56 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:33:25.845 18:31:56 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:33:25.845 18:31:56 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:33:25.845 18:31:56 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:33:25.845 18:31:56 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:33:25.845 18:31:56 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:33:26.103 18:31:56 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:33:26.103 18:31:56 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:33:26.103 18:31:56 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:33:26.103 18:31:56 bdev_raid.raid_rebuild_test_io -- 
bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:33:26.103 18:31:56 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:33:26.103 18:31:56 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:33:26.103 18:31:56 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:33:26.103 18:31:56 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:33:26.103 18:31:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:33:26.103 18:31:56 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:33:26.103 18:31:56 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:33:26.103 18:31:56 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:33:26.103 18:31:56 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:33:26.103 18:31:56 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:33:26.103 18:31:56 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:33:26.362 18:31:57 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:33:26.362 18:31:57 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:33:26.362 18:31:57 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:33:26.362 18:31:57 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:33:26.362 18:31:57 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:33:26.362 18:31:57 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:33:26.362 18:31:57 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:33:26.362 18:31:57 
bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:33:26.362 18:31:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:33:26.362 18:31:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@784 -- # killprocess 78500 00:33:26.362 18:31:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@954 -- # '[' -z 78500 ']' 00:33:26.362 18:31:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@958 -- # kill -0 78500 00:33:26.362 18:31:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@959 -- # uname 00:33:26.362 18:31:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:26.362 18:31:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 78500 00:33:26.362 18:31:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:33:26.362 killing process with pid 78500 00:33:26.362 Received shutdown signal, test time was about 9.903485 seconds 00:33:26.362 00:33:26.362 Latency(us) 00:33:26.362 [2024-12-06T18:31:57.311Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:26.362 [2024-12-06T18:31:57.311Z] =================================================================================================================== 00:33:26.362 [2024-12-06T18:31:57.311Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:33:26.362 18:31:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:33:26.362 18:31:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@972 -- # echo 'killing process with pid 78500' 00:33:26.362 18:31:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@973 -- # kill 78500 00:33:26.362 [2024-12-06 18:31:57.270236] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:33:26.362 18:31:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@978 -- # wait 
78500 00:33:26.928 [2024-12-06 18:31:57.713311] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:33:28.325 18:31:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@786 -- # return 0 00:33:28.325 00:33:28.325 real 0m13.510s 00:33:28.325 user 0m16.437s 00:33:28.325 sys 0m2.248s 00:33:28.325 18:31:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:28.325 18:31:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:33:28.325 ************************************ 00:33:28.325 END TEST raid_rebuild_test_io 00:33:28.325 ************************************ 00:33:28.325 18:31:59 bdev_raid -- bdev/bdev_raid.sh@981 -- # run_test raid_rebuild_test_sb_io raid_rebuild_test raid1 4 true true true 00:33:28.325 18:31:59 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:33:28.325 18:31:59 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:28.325 18:31:59 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:33:28.325 ************************************ 00:33:28.325 START TEST raid_rebuild_test_sb_io 00:33:28.325 ************************************ 00:33:28.325 18:31:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 4 true true true 00:33:28.325 18:31:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:33:28.325 18:31:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:33:28.325 18:31:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:33:28.325 18:31:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:33:28.325 18:31:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:33:28.325 18:31:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:33:28.325 18:31:59 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:33:28.325 18:31:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:33:28.325 18:31:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:33:28.325 18:31:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:33:28.325 18:31:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:33:28.325 18:31:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:33:28.325 18:31:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:33:28.325 18:31:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:33:28.325 18:31:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:33:28.325 18:31:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:33:28.325 18:31:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:33:28.325 18:31:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:33:28.325 18:31:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:33:28.325 18:31:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:33:28.325 18:31:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:33:28.325 18:31:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:33:28.325 18:31:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:33:28.325 18:31:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:33:28.325 18:31:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@578 -- # local 
raid_bdev_size 00:33:28.325 18:31:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:33:28.325 18:31:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:33:28.325 18:31:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:33:28.325 18:31:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:33:28.325 18:31:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:33:28.325 18:31:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@597 -- # raid_pid=78909 00:33:28.325 18:31:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 78909 00:33:28.325 18:31:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:33:28.325 18:31:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@835 -- # '[' -z 78909 ']' 00:33:28.325 18:31:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:28.325 18:31:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:28.325 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:28.325 18:31:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:28.325 18:31:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:28.325 18:31:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:33:28.325 I/O size of 3145728 is greater than zero copy threshold (65536). 00:33:28.325 Zero copy mechanism will not be used. 
00:33:28.325 [2024-12-06 18:31:59.199019] Starting SPDK v25.01-pre git sha1 b6a18b192 / DPDK 24.03.0 initialization... 00:33:28.325 [2024-12-06 18:31:59.199188] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78909 ] 00:33:28.607 [2024-12-06 18:31:59.386682] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:28.607 [2024-12-06 18:31:59.523705] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:28.867 [2024-12-06 18:31:59.757632] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:33:28.867 [2024-12-06 18:31:59.757715] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:33:29.127 18:32:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:29.127 18:32:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@868 -- # return 0 00:33:29.127 18:32:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:33:29.127 18:32:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:33:29.127 18:32:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:29.127 18:32:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:33:29.387 BaseBdev1_malloc 00:33:29.387 18:32:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:29.387 18:32:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:33:29.387 18:32:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:29.387 18:32:00 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@10 -- # set +x 00:33:29.387 [2024-12-06 18:32:00.091273] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:33:29.387 [2024-12-06 18:32:00.091352] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:33:29.387 [2024-12-06 18:32:00.091381] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:33:29.387 [2024-12-06 18:32:00.091397] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:33:29.387 [2024-12-06 18:32:00.094193] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:33:29.387 [2024-12-06 18:32:00.094355] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:33:29.387 BaseBdev1 00:33:29.387 18:32:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:29.387 18:32:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:33:29.387 18:32:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:33:29.387 18:32:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:29.387 18:32:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:33:29.387 BaseBdev2_malloc 00:33:29.387 18:32:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:29.387 18:32:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:33:29.387 18:32:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:29.387 18:32:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:33:29.387 [2024-12-06 18:32:00.157742] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 
00:33:29.387 [2024-12-06 18:32:00.157937] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:33:29.387 [2024-12-06 18:32:00.157975] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:33:29.387 [2024-12-06 18:32:00.157992] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:33:29.387 [2024-12-06 18:32:00.160665] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:33:29.387 [2024-12-06 18:32:00.160709] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:33:29.387 BaseBdev2 00:33:29.387 18:32:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:29.387 18:32:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:33:29.387 18:32:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:33:29.387 18:32:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:29.387 18:32:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:33:29.387 BaseBdev3_malloc 00:33:29.387 18:32:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:29.387 18:32:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:33:29.387 18:32:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:29.387 18:32:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:33:29.387 [2024-12-06 18:32:00.235008] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:33:29.387 [2024-12-06 18:32:00.235076] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:33:29.388 [2024-12-06 18:32:00.235103] 
vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:33:29.388 [2024-12-06 18:32:00.235119] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:33:29.388 [2024-12-06 18:32:00.237757] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:33:29.388 [2024-12-06 18:32:00.237802] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:33:29.388 BaseBdev3 00:33:29.388 18:32:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:29.388 18:32:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:33:29.388 18:32:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:33:29.388 18:32:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:29.388 18:32:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:33:29.388 BaseBdev4_malloc 00:33:29.388 18:32:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:29.388 18:32:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:33:29.388 18:32:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:29.388 18:32:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:33:29.388 [2024-12-06 18:32:00.298989] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:33:29.388 [2024-12-06 18:32:00.299063] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:33:29.388 [2024-12-06 18:32:00.299089] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:33:29.388 [2024-12-06 18:32:00.299105] vbdev_passthru.c: 
697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:33:29.388 [2024-12-06 18:32:00.301781] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:33:29.388 [2024-12-06 18:32:00.301828] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:33:29.388 BaseBdev4 00:33:29.388 18:32:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:29.388 18:32:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:33:29.388 18:32:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:29.388 18:32:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:33:29.648 spare_malloc 00:33:29.648 18:32:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:29.648 18:32:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:33:29.648 18:32:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:29.648 18:32:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:33:29.648 spare_delay 00:33:29.648 18:32:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:29.648 18:32:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:33:29.648 18:32:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:29.648 18:32:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:33:29.648 [2024-12-06 18:32:00.374002] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:33:29.648 [2024-12-06 18:32:00.374062] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev 
opened 00:33:29.648 [2024-12-06 18:32:00.374100] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:33:29.648 [2024-12-06 18:32:00.374115] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:33:29.648 [2024-12-06 18:32:00.376746] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:33:29.648 [2024-12-06 18:32:00.376791] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:33:29.648 spare 00:33:29.648 18:32:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:29.648 18:32:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:33:29.648 18:32:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:29.648 18:32:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:33:29.648 [2024-12-06 18:32:00.386048] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:33:29.648 [2024-12-06 18:32:00.388396] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:33:29.648 [2024-12-06 18:32:00.388460] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:33:29.648 [2024-12-06 18:32:00.388514] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:33:29.648 [2024-12-06 18:32:00.388701] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:33:29.648 [2024-12-06 18:32:00.388719] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:33:29.648 [2024-12-06 18:32:00.388999] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:33:29.648 [2024-12-06 18:32:00.389219] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: 
raid bdev generic 0x617000007780 00:33:29.648 [2024-12-06 18:32:00.389231] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:33:29.648 [2024-12-06 18:32:00.389378] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:33:29.648 18:32:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:29.648 18:32:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:33:29.648 18:32:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:33:29.648 18:32:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:33:29.648 18:32:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:33:29.648 18:32:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:33:29.648 18:32:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:33:29.648 18:32:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:33:29.648 18:32:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:33:29.648 18:32:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:33:29.648 18:32:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:33:29.648 18:32:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:29.648 18:32:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:29.648 18:32:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:29.648 18:32:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set 
+x 00:33:29.648 18:32:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:29.648 18:32:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:33:29.648 "name": "raid_bdev1", 00:33:29.648 "uuid": "199347a2-85c7-47c7-940c-37fcf04983f1", 00:33:29.648 "strip_size_kb": 0, 00:33:29.648 "state": "online", 00:33:29.648 "raid_level": "raid1", 00:33:29.648 "superblock": true, 00:33:29.648 "num_base_bdevs": 4, 00:33:29.648 "num_base_bdevs_discovered": 4, 00:33:29.648 "num_base_bdevs_operational": 4, 00:33:29.648 "base_bdevs_list": [ 00:33:29.648 { 00:33:29.648 "name": "BaseBdev1", 00:33:29.648 "uuid": "43f9cf34-77f6-5cd3-84e8-9d482a988644", 00:33:29.648 "is_configured": true, 00:33:29.648 "data_offset": 2048, 00:33:29.648 "data_size": 63488 00:33:29.648 }, 00:33:29.648 { 00:33:29.648 "name": "BaseBdev2", 00:33:29.648 "uuid": "f6b5f3f3-65a8-56f5-bf08-dd56bc140488", 00:33:29.648 "is_configured": true, 00:33:29.648 "data_offset": 2048, 00:33:29.648 "data_size": 63488 00:33:29.648 }, 00:33:29.648 { 00:33:29.648 "name": "BaseBdev3", 00:33:29.648 "uuid": "e22b9e99-6fa4-5138-9638-653e28acd0a4", 00:33:29.648 "is_configured": true, 00:33:29.648 "data_offset": 2048, 00:33:29.648 "data_size": 63488 00:33:29.648 }, 00:33:29.648 { 00:33:29.648 "name": "BaseBdev4", 00:33:29.648 "uuid": "4e73ea17-1207-5d67-ac7b-a56386e83c5b", 00:33:29.648 "is_configured": true, 00:33:29.648 "data_offset": 2048, 00:33:29.648 "data_size": 63488 00:33:29.648 } 00:33:29.648 ] 00:33:29.648 }' 00:33:29.648 18:32:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:33:29.649 18:32:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:33:29.908 18:32:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:33:29.908 18:32:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:29.908 18:32:00 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:33:29.908 18:32:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:33:29.908 [2024-12-06 18:32:00.761885] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:33:29.908 18:32:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:29.908 18:32:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:33:29.908 18:32:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:33:29.908 18:32:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:29.908 18:32:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:29.908 18:32:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:33:29.908 18:32:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:29.908 18:32:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:33:29.908 18:32:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:33:29.908 18:32:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:33:29.908 18:32:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:33:29.908 18:32:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:29.908 18:32:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:33:29.908 [2024-12-06 18:32:00.833375] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:33:29.908 18:32:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:33:29.908 18:32:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:33:29.908 18:32:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:33:29.908 18:32:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:33:29.908 18:32:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:33:29.908 18:32:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:33:29.908 18:32:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:33:29.908 18:32:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:33:29.908 18:32:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:33:29.908 18:32:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:33:29.908 18:32:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:33:29.908 18:32:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:29.908 18:32:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:29.908 18:32:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:29.909 18:32:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:33:30.169 18:32:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:30.169 18:32:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:33:30.169 "name": "raid_bdev1", 00:33:30.169 "uuid": "199347a2-85c7-47c7-940c-37fcf04983f1", 00:33:30.169 "strip_size_kb": 0, 00:33:30.169 "state": "online", 00:33:30.169 
"raid_level": "raid1", 00:33:30.169 "superblock": true, 00:33:30.169 "num_base_bdevs": 4, 00:33:30.169 "num_base_bdevs_discovered": 3, 00:33:30.169 "num_base_bdevs_operational": 3, 00:33:30.169 "base_bdevs_list": [ 00:33:30.169 { 00:33:30.169 "name": null, 00:33:30.169 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:30.169 "is_configured": false, 00:33:30.169 "data_offset": 0, 00:33:30.169 "data_size": 63488 00:33:30.169 }, 00:33:30.169 { 00:33:30.169 "name": "BaseBdev2", 00:33:30.169 "uuid": "f6b5f3f3-65a8-56f5-bf08-dd56bc140488", 00:33:30.169 "is_configured": true, 00:33:30.169 "data_offset": 2048, 00:33:30.169 "data_size": 63488 00:33:30.169 }, 00:33:30.169 { 00:33:30.169 "name": "BaseBdev3", 00:33:30.169 "uuid": "e22b9e99-6fa4-5138-9638-653e28acd0a4", 00:33:30.169 "is_configured": true, 00:33:30.169 "data_offset": 2048, 00:33:30.169 "data_size": 63488 00:33:30.169 }, 00:33:30.169 { 00:33:30.169 "name": "BaseBdev4", 00:33:30.169 "uuid": "4e73ea17-1207-5d67-ac7b-a56386e83c5b", 00:33:30.169 "is_configured": true, 00:33:30.169 "data_offset": 2048, 00:33:30.169 "data_size": 63488 00:33:30.169 } 00:33:30.169 ] 00:33:30.169 }' 00:33:30.169 18:32:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:33:30.169 18:32:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:33:30.169 [2024-12-06 18:32:00.930736] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:33:30.169 I/O size of 3145728 is greater than zero copy threshold (65536). 00:33:30.169 Zero copy mechanism will not be used. 00:33:30.169 Running I/O for 60 seconds... 
00:33:30.428 18:32:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:33:30.428 18:32:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:30.428 18:32:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:33:30.428 [2024-12-06 18:32:01.240983] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:33:30.428 18:32:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:30.428 18:32:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:33:30.428 [2024-12-06 18:32:01.298278] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000062f0 00:33:30.428 [2024-12-06 18:32:01.301044] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:33:30.687 [2024-12-06 18:32:01.424437] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:33:30.687 [2024-12-06 18:32:01.427020] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:33:30.947 [2024-12-06 18:32:01.680531] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:33:31.205 146.00 IOPS, 438.00 MiB/s [2024-12-06T18:32:02.154Z] [2024-12-06 18:32:02.042208] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:33:31.464 [2024-12-06 18:32:02.284618] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:33:31.464 18:32:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:33:31.464 18:32:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 
-- # local raid_bdev_name=raid_bdev1 00:33:31.464 18:32:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:33:31.464 18:32:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:33:31.464 18:32:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:33:31.464 18:32:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:31.464 18:32:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:31.464 18:32:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:31.464 18:32:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:33:31.464 18:32:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:31.464 18:32:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:33:31.464 "name": "raid_bdev1", 00:33:31.464 "uuid": "199347a2-85c7-47c7-940c-37fcf04983f1", 00:33:31.464 "strip_size_kb": 0, 00:33:31.464 "state": "online", 00:33:31.464 "raid_level": "raid1", 00:33:31.464 "superblock": true, 00:33:31.464 "num_base_bdevs": 4, 00:33:31.464 "num_base_bdevs_discovered": 4, 00:33:31.464 "num_base_bdevs_operational": 4, 00:33:31.464 "process": { 00:33:31.464 "type": "rebuild", 00:33:31.464 "target": "spare", 00:33:31.464 "progress": { 00:33:31.464 "blocks": 10240, 00:33:31.464 "percent": 16 00:33:31.464 } 00:33:31.464 }, 00:33:31.465 "base_bdevs_list": [ 00:33:31.465 { 00:33:31.465 "name": "spare", 00:33:31.465 "uuid": "76c9c669-e82f-53c6-82a6-83a8b08411a8", 00:33:31.465 "is_configured": true, 00:33:31.465 "data_offset": 2048, 00:33:31.465 "data_size": 63488 00:33:31.465 }, 00:33:31.465 { 00:33:31.465 "name": "BaseBdev2", 00:33:31.465 "uuid": "f6b5f3f3-65a8-56f5-bf08-dd56bc140488", 00:33:31.465 "is_configured": true, 
00:33:31.465 "data_offset": 2048, 00:33:31.465 "data_size": 63488 00:33:31.465 }, 00:33:31.465 { 00:33:31.465 "name": "BaseBdev3", 00:33:31.465 "uuid": "e22b9e99-6fa4-5138-9638-653e28acd0a4", 00:33:31.465 "is_configured": true, 00:33:31.465 "data_offset": 2048, 00:33:31.465 "data_size": 63488 00:33:31.465 }, 00:33:31.465 { 00:33:31.465 "name": "BaseBdev4", 00:33:31.465 "uuid": "4e73ea17-1207-5d67-ac7b-a56386e83c5b", 00:33:31.465 "is_configured": true, 00:33:31.465 "data_offset": 2048, 00:33:31.465 "data_size": 63488 00:33:31.465 } 00:33:31.465 ] 00:33:31.465 }' 00:33:31.465 18:32:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:33:31.465 18:32:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:33:31.465 18:32:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:33:31.722 18:32:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:33:31.722 18:32:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:33:31.722 18:32:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:31.722 18:32:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:33:31.722 [2024-12-06 18:32:02.445262] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:33:31.722 [2024-12-06 18:32:02.612204] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:33:31.722 [2024-12-06 18:32:02.625464] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:33:31.722 [2024-12-06 18:32:02.625547] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:33:31.722 [2024-12-06 18:32:02.625566] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such 
device 00:33:31.722 [2024-12-06 18:32:02.659030] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006220 00:33:31.981 18:32:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:31.981 18:32:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:33:31.981 18:32:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:33:31.981 18:32:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:33:31.981 18:32:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:33:31.981 18:32:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:33:31.981 18:32:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:33:31.981 18:32:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:33:31.981 18:32:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:33:31.982 18:32:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:33:31.982 18:32:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:33:31.982 18:32:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:31.982 18:32:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:31.982 18:32:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:31.982 18:32:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:33:31.982 18:32:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:31.982 18:32:02 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:33:31.982 "name": "raid_bdev1", 00:33:31.982 "uuid": "199347a2-85c7-47c7-940c-37fcf04983f1", 00:33:31.982 "strip_size_kb": 0, 00:33:31.982 "state": "online", 00:33:31.982 "raid_level": "raid1", 00:33:31.982 "superblock": true, 00:33:31.982 "num_base_bdevs": 4, 00:33:31.982 "num_base_bdevs_discovered": 3, 00:33:31.982 "num_base_bdevs_operational": 3, 00:33:31.982 "base_bdevs_list": [ 00:33:31.982 { 00:33:31.982 "name": null, 00:33:31.982 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:31.982 "is_configured": false, 00:33:31.982 "data_offset": 0, 00:33:31.982 "data_size": 63488 00:33:31.982 }, 00:33:31.982 { 00:33:31.982 "name": "BaseBdev2", 00:33:31.982 "uuid": "f6b5f3f3-65a8-56f5-bf08-dd56bc140488", 00:33:31.982 "is_configured": true, 00:33:31.982 "data_offset": 2048, 00:33:31.982 "data_size": 63488 00:33:31.982 }, 00:33:31.982 { 00:33:31.982 "name": "BaseBdev3", 00:33:31.982 "uuid": "e22b9e99-6fa4-5138-9638-653e28acd0a4", 00:33:31.982 "is_configured": true, 00:33:31.982 "data_offset": 2048, 00:33:31.982 "data_size": 63488 00:33:31.982 }, 00:33:31.982 { 00:33:31.982 "name": "BaseBdev4", 00:33:31.982 "uuid": "4e73ea17-1207-5d67-ac7b-a56386e83c5b", 00:33:31.982 "is_configured": true, 00:33:31.982 "data_offset": 2048, 00:33:31.982 "data_size": 63488 00:33:31.982 } 00:33:31.982 ] 00:33:31.982 }' 00:33:31.982 18:32:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:33:31.982 18:32:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:33:32.241 150.00 IOPS, 450.00 MiB/s [2024-12-06T18:32:03.190Z] 18:32:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:33:32.241 18:32:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:33:32.241 18:32:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local 
process_type=none 00:33:32.241 18:32:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:33:32.241 18:32:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:33:32.241 18:32:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:32.241 18:32:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:32.241 18:32:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:33:32.241 18:32:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:32.241 18:32:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:32.241 18:32:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:33:32.241 "name": "raid_bdev1", 00:33:32.241 "uuid": "199347a2-85c7-47c7-940c-37fcf04983f1", 00:33:32.241 "strip_size_kb": 0, 00:33:32.241 "state": "online", 00:33:32.241 "raid_level": "raid1", 00:33:32.241 "superblock": true, 00:33:32.241 "num_base_bdevs": 4, 00:33:32.241 "num_base_bdevs_discovered": 3, 00:33:32.241 "num_base_bdevs_operational": 3, 00:33:32.241 "base_bdevs_list": [ 00:33:32.241 { 00:33:32.241 "name": null, 00:33:32.241 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:32.241 "is_configured": false, 00:33:32.241 "data_offset": 0, 00:33:32.241 "data_size": 63488 00:33:32.241 }, 00:33:32.241 { 00:33:32.241 "name": "BaseBdev2", 00:33:32.241 "uuid": "f6b5f3f3-65a8-56f5-bf08-dd56bc140488", 00:33:32.241 "is_configured": true, 00:33:32.241 "data_offset": 2048, 00:33:32.241 "data_size": 63488 00:33:32.241 }, 00:33:32.241 { 00:33:32.241 "name": "BaseBdev3", 00:33:32.241 "uuid": "e22b9e99-6fa4-5138-9638-653e28acd0a4", 00:33:32.241 "is_configured": true, 00:33:32.241 "data_offset": 2048, 00:33:32.241 "data_size": 63488 00:33:32.241 }, 00:33:32.241 { 00:33:32.241 "name": 
"BaseBdev4", 00:33:32.241 "uuid": "4e73ea17-1207-5d67-ac7b-a56386e83c5b", 00:33:32.241 "is_configured": true, 00:33:32.241 "data_offset": 2048, 00:33:32.241 "data_size": 63488 00:33:32.241 } 00:33:32.241 ] 00:33:32.241 }' 00:33:32.241 18:32:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:33:32.500 18:32:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:33:32.500 18:32:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:33:32.500 18:32:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:33:32.500 18:32:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:33:32.500 18:32:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:32.500 18:32:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:33:32.500 [2024-12-06 18:32:03.234421] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:33:32.500 18:32:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:32.500 18:32:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:33:32.500 [2024-12-06 18:32:03.336136] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:33:32.500 [2024-12-06 18:32:03.338757] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:33:32.500 [2024-12-06 18:32:03.446675] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:33:32.758 [2024-12-06 18:32:03.448946] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:33:32.758 [2024-12-06 18:32:03.669578] bdev_raid.c: 
859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:33:32.758 [2024-12-06 18:32:03.670051] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:33:33.276 136.00 IOPS, 408.00 MiB/s [2024-12-06T18:32:04.225Z] [2024-12-06 18:32:04.110219] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:33:33.276 [2024-12-06 18:32:04.111402] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:33:33.535 18:32:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:33:33.535 18:32:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:33:33.535 18:32:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:33:33.535 18:32:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:33:33.535 18:32:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:33:33.535 18:32:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:33.535 18:32:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:33.535 18:32:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:33.535 18:32:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:33:33.535 18:32:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:33.535 18:32:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:33:33.535 "name": "raid_bdev1", 00:33:33.535 "uuid": "199347a2-85c7-47c7-940c-37fcf04983f1", 00:33:33.535 
"strip_size_kb": 0, 00:33:33.535 "state": "online", 00:33:33.535 "raid_level": "raid1", 00:33:33.535 "superblock": true, 00:33:33.535 "num_base_bdevs": 4, 00:33:33.535 "num_base_bdevs_discovered": 4, 00:33:33.535 "num_base_bdevs_operational": 4, 00:33:33.535 "process": { 00:33:33.535 "type": "rebuild", 00:33:33.535 "target": "spare", 00:33:33.535 "progress": { 00:33:33.535 "blocks": 10240, 00:33:33.535 "percent": 16 00:33:33.535 } 00:33:33.535 }, 00:33:33.535 "base_bdevs_list": [ 00:33:33.535 { 00:33:33.535 "name": "spare", 00:33:33.535 "uuid": "76c9c669-e82f-53c6-82a6-83a8b08411a8", 00:33:33.535 "is_configured": true, 00:33:33.535 "data_offset": 2048, 00:33:33.535 "data_size": 63488 00:33:33.535 }, 00:33:33.535 { 00:33:33.535 "name": "BaseBdev2", 00:33:33.535 "uuid": "f6b5f3f3-65a8-56f5-bf08-dd56bc140488", 00:33:33.535 "is_configured": true, 00:33:33.535 "data_offset": 2048, 00:33:33.535 "data_size": 63488 00:33:33.535 }, 00:33:33.535 { 00:33:33.535 "name": "BaseBdev3", 00:33:33.535 "uuid": "e22b9e99-6fa4-5138-9638-653e28acd0a4", 00:33:33.535 "is_configured": true, 00:33:33.535 "data_offset": 2048, 00:33:33.535 "data_size": 63488 00:33:33.535 }, 00:33:33.535 { 00:33:33.535 "name": "BaseBdev4", 00:33:33.535 "uuid": "4e73ea17-1207-5d67-ac7b-a56386e83c5b", 00:33:33.535 "is_configured": true, 00:33:33.535 "data_offset": 2048, 00:33:33.535 "data_size": 63488 00:33:33.535 } 00:33:33.535 ] 00:33:33.535 }' 00:33:33.535 18:32:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:33:33.535 18:32:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:33:33.535 18:32:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:33:33.535 18:32:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:33:33.535 18:32:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 
00:33:33.535 18:32:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:33:33.535 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:33:33.535 18:32:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:33:33.535 18:32:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:33:33.535 18:32:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:33:33.535 18:32:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:33:33.535 18:32:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:33.535 18:32:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:33:33.535 [2024-12-06 18:32:04.431840] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:33:33.795 [2024-12-06 18:32:04.486043] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:33:33.795 [2024-12-06 18:32:04.689494] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000006220 00:33:33.795 [2024-12-06 18:32:04.689552] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d0000063c0 00:33:33.795 [2024-12-06 18:32:04.698152] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:33:33.795 18:32:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:33.795 18:32:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:33:33.795 18:32:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:33:33.795 18:32:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@702 -- 
# verify_raid_bdev_process raid_bdev1 rebuild spare 00:33:33.795 18:32:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:33:33.795 18:32:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:33:33.795 18:32:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:33:33.795 18:32:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:33:33.795 18:32:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:33.795 18:32:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:33.795 18:32:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:33.795 18:32:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:33:33.795 18:32:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:34.054 18:32:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:33:34.054 "name": "raid_bdev1", 00:33:34.054 "uuid": "199347a2-85c7-47c7-940c-37fcf04983f1", 00:33:34.054 "strip_size_kb": 0, 00:33:34.054 "state": "online", 00:33:34.054 "raid_level": "raid1", 00:33:34.054 "superblock": true, 00:33:34.054 "num_base_bdevs": 4, 00:33:34.054 "num_base_bdevs_discovered": 3, 00:33:34.054 "num_base_bdevs_operational": 3, 00:33:34.054 "process": { 00:33:34.054 "type": "rebuild", 00:33:34.054 "target": "spare", 00:33:34.054 "progress": { 00:33:34.054 "blocks": 14336, 00:33:34.054 "percent": 22 00:33:34.054 } 00:33:34.054 }, 00:33:34.054 "base_bdevs_list": [ 00:33:34.054 { 00:33:34.054 "name": "spare", 00:33:34.054 "uuid": "76c9c669-e82f-53c6-82a6-83a8b08411a8", 00:33:34.054 "is_configured": true, 00:33:34.054 "data_offset": 2048, 00:33:34.054 "data_size": 63488 00:33:34.054 }, 00:33:34.054 { 
00:33:34.054 "name": null, 00:33:34.054 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:34.054 "is_configured": false, 00:33:34.054 "data_offset": 0, 00:33:34.054 "data_size": 63488 00:33:34.054 }, 00:33:34.054 { 00:33:34.054 "name": "BaseBdev3", 00:33:34.054 "uuid": "e22b9e99-6fa4-5138-9638-653e28acd0a4", 00:33:34.054 "is_configured": true, 00:33:34.054 "data_offset": 2048, 00:33:34.054 "data_size": 63488 00:33:34.054 }, 00:33:34.054 { 00:33:34.054 "name": "BaseBdev4", 00:33:34.054 "uuid": "4e73ea17-1207-5d67-ac7b-a56386e83c5b", 00:33:34.055 "is_configured": true, 00:33:34.055 "data_offset": 2048, 00:33:34.055 "data_size": 63488 00:33:34.055 } 00:33:34.055 ] 00:33:34.055 }' 00:33:34.055 18:32:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:33:34.055 18:32:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:33:34.055 18:32:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:33:34.055 18:32:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:33:34.055 18:32:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # local timeout=501 00:33:34.055 18:32:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:33:34.055 18:32:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:33:34.055 18:32:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:33:34.055 18:32:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:33:34.055 18:32:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:33:34.055 18:32:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:33:34.055 18:32:04 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:34.055 18:32:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:34.055 18:32:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:34.055 18:32:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:33:34.055 18:32:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:34.055 18:32:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:33:34.055 "name": "raid_bdev1", 00:33:34.055 "uuid": "199347a2-85c7-47c7-940c-37fcf04983f1", 00:33:34.055 "strip_size_kb": 0, 00:33:34.055 "state": "online", 00:33:34.055 "raid_level": "raid1", 00:33:34.055 "superblock": true, 00:33:34.055 "num_base_bdevs": 4, 00:33:34.055 "num_base_bdevs_discovered": 3, 00:33:34.055 "num_base_bdevs_operational": 3, 00:33:34.055 "process": { 00:33:34.055 "type": "rebuild", 00:33:34.055 "target": "spare", 00:33:34.055 "progress": { 00:33:34.055 "blocks": 14336, 00:33:34.055 "percent": 22 00:33:34.055 } 00:33:34.055 }, 00:33:34.055 "base_bdevs_list": [ 00:33:34.055 { 00:33:34.055 "name": "spare", 00:33:34.055 "uuid": "76c9c669-e82f-53c6-82a6-83a8b08411a8", 00:33:34.055 "is_configured": true, 00:33:34.055 "data_offset": 2048, 00:33:34.055 "data_size": 63488 00:33:34.055 }, 00:33:34.055 { 00:33:34.055 "name": null, 00:33:34.055 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:34.055 "is_configured": false, 00:33:34.055 "data_offset": 0, 00:33:34.055 "data_size": 63488 00:33:34.055 }, 00:33:34.055 { 00:33:34.055 "name": "BaseBdev3", 00:33:34.055 "uuid": "e22b9e99-6fa4-5138-9638-653e28acd0a4", 00:33:34.055 "is_configured": true, 00:33:34.055 "data_offset": 2048, 00:33:34.055 "data_size": 63488 00:33:34.055 }, 00:33:34.055 { 00:33:34.055 "name": "BaseBdev4", 00:33:34.055 "uuid": 
"4e73ea17-1207-5d67-ac7b-a56386e83c5b", 00:33:34.055 "is_configured": true, 00:33:34.055 "data_offset": 2048, 00:33:34.055 "data_size": 63488 00:33:34.055 } 00:33:34.055 ] 00:33:34.055 }' 00:33:34.055 18:32:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:33:34.055 18:32:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:33:34.055 18:32:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:33:34.055 117.75 IOPS, 353.25 MiB/s [2024-12-06T18:32:05.004Z] 18:32:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:33:34.055 18:32:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:33:34.314 [2024-12-06 18:32:05.137445] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:33:34.314 [2024-12-06 18:32:05.254418] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:33:34.882 [2024-12-06 18:32:05.572788] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:33:35.141 [2024-12-06 18:32:05.912975] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:33:35.141 106.80 IOPS, 320.40 MiB/s [2024-12-06T18:32:06.090Z] 18:32:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:33:35.141 18:32:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:33:35.141 18:32:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:33:35.141 18:32:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 
00:33:35.141 18:32:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:33:35.141 18:32:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:33:35.141 18:32:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:35.141 18:32:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:35.141 18:32:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:35.141 18:32:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:33:35.141 18:32:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:35.141 18:32:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:33:35.141 "name": "raid_bdev1", 00:33:35.141 "uuid": "199347a2-85c7-47c7-940c-37fcf04983f1", 00:33:35.141 "strip_size_kb": 0, 00:33:35.141 "state": "online", 00:33:35.141 "raid_level": "raid1", 00:33:35.141 "superblock": true, 00:33:35.141 "num_base_bdevs": 4, 00:33:35.141 "num_base_bdevs_discovered": 3, 00:33:35.141 "num_base_bdevs_operational": 3, 00:33:35.141 "process": { 00:33:35.141 "type": "rebuild", 00:33:35.141 "target": "spare", 00:33:35.141 "progress": { 00:33:35.141 "blocks": 34816, 00:33:35.141 "percent": 54 00:33:35.141 } 00:33:35.141 }, 00:33:35.141 "base_bdevs_list": [ 00:33:35.141 { 00:33:35.141 "name": "spare", 00:33:35.141 "uuid": "76c9c669-e82f-53c6-82a6-83a8b08411a8", 00:33:35.141 "is_configured": true, 00:33:35.141 "data_offset": 2048, 00:33:35.141 "data_size": 63488 00:33:35.141 }, 00:33:35.141 { 00:33:35.141 "name": null, 00:33:35.141 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:35.141 "is_configured": false, 00:33:35.141 "data_offset": 0, 00:33:35.141 "data_size": 63488 00:33:35.141 }, 00:33:35.141 { 00:33:35.141 "name": "BaseBdev3", 00:33:35.141 "uuid": 
"e22b9e99-6fa4-5138-9638-653e28acd0a4", 00:33:35.141 "is_configured": true, 00:33:35.141 "data_offset": 2048, 00:33:35.141 "data_size": 63488 00:33:35.141 }, 00:33:35.141 { 00:33:35.141 "name": "BaseBdev4", 00:33:35.141 "uuid": "4e73ea17-1207-5d67-ac7b-a56386e83c5b", 00:33:35.141 "is_configured": true, 00:33:35.141 "data_offset": 2048, 00:33:35.141 "data_size": 63488 00:33:35.141 } 00:33:35.141 ] 00:33:35.141 }' 00:33:35.141 18:32:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:33:35.142 18:32:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:33:35.142 18:32:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:33:35.400 18:32:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:33:35.400 18:32:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:33:35.400 [2024-12-06 18:32:06.133974] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:33:35.400 [2024-12-06 18:32:06.240168] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:33:35.400 [2024-12-06 18:32:06.240602] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:33:35.968 [2024-12-06 18:32:06.688234] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152 00:33:36.228 95.67 IOPS, 287.00 MiB/s [2024-12-06T18:32:07.177Z] 18:32:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:33:36.228 18:32:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:33:36.228 18:32:07 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:33:36.228 18:32:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:33:36.228 18:32:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:33:36.228 18:32:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:33:36.228 18:32:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:36.228 18:32:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:36.228 18:32:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:36.228 18:32:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:33:36.228 18:32:07 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:36.228 18:32:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:33:36.228 "name": "raid_bdev1", 00:33:36.228 "uuid": "199347a2-85c7-47c7-940c-37fcf04983f1", 00:33:36.228 "strip_size_kb": 0, 00:33:36.228 "state": "online", 00:33:36.228 "raid_level": "raid1", 00:33:36.228 "superblock": true, 00:33:36.228 "num_base_bdevs": 4, 00:33:36.229 "num_base_bdevs_discovered": 3, 00:33:36.229 "num_base_bdevs_operational": 3, 00:33:36.229 "process": { 00:33:36.229 "type": "rebuild", 00:33:36.229 "target": "spare", 00:33:36.229 "progress": { 00:33:36.229 "blocks": 55296, 00:33:36.229 "percent": 87 00:33:36.229 } 00:33:36.229 }, 00:33:36.229 "base_bdevs_list": [ 00:33:36.229 { 00:33:36.229 "name": "spare", 00:33:36.229 "uuid": "76c9c669-e82f-53c6-82a6-83a8b08411a8", 00:33:36.229 "is_configured": true, 00:33:36.229 "data_offset": 2048, 00:33:36.229 "data_size": 63488 00:33:36.229 }, 00:33:36.229 { 00:33:36.229 "name": null, 00:33:36.229 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:36.229 
"is_configured": false, 00:33:36.229 "data_offset": 0, 00:33:36.229 "data_size": 63488 00:33:36.229 }, 00:33:36.229 { 00:33:36.229 "name": "BaseBdev3", 00:33:36.229 "uuid": "e22b9e99-6fa4-5138-9638-653e28acd0a4", 00:33:36.229 "is_configured": true, 00:33:36.229 "data_offset": 2048, 00:33:36.229 "data_size": 63488 00:33:36.229 }, 00:33:36.229 { 00:33:36.229 "name": "BaseBdev4", 00:33:36.229 "uuid": "4e73ea17-1207-5d67-ac7b-a56386e83c5b", 00:33:36.229 "is_configured": true, 00:33:36.229 "data_offset": 2048, 00:33:36.229 "data_size": 63488 00:33:36.229 } 00:33:36.229 ] 00:33:36.229 }' 00:33:36.229 18:32:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:33:36.488 18:32:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:33:36.488 18:32:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:33:36.488 18:32:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:33:36.488 18:32:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:33:36.488 [2024-12-06 18:32:07.247751] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 57344 offset_begin: 55296 offset_end: 61440 00:33:36.747 [2024-12-06 18:32:07.594924] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:33:37.006 [2024-12-06 18:32:07.700344] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:33:37.006 [2024-12-06 18:32:07.706587] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:33:37.575 88.43 IOPS, 265.29 MiB/s [2024-12-06T18:32:08.524Z] 18:32:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:33:37.575 18:32:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 
00:33:37.575 18:32:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:33:37.575 18:32:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:33:37.575 18:32:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:33:37.575 18:32:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:33:37.575 18:32:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:37.575 18:32:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:37.575 18:32:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:37.575 18:32:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:33:37.575 18:32:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:37.575 18:32:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:33:37.575 "name": "raid_bdev1", 00:33:37.575 "uuid": "199347a2-85c7-47c7-940c-37fcf04983f1", 00:33:37.575 "strip_size_kb": 0, 00:33:37.575 "state": "online", 00:33:37.575 "raid_level": "raid1", 00:33:37.575 "superblock": true, 00:33:37.575 "num_base_bdevs": 4, 00:33:37.575 "num_base_bdevs_discovered": 3, 00:33:37.575 "num_base_bdevs_operational": 3, 00:33:37.575 "base_bdevs_list": [ 00:33:37.575 { 00:33:37.575 "name": "spare", 00:33:37.575 "uuid": "76c9c669-e82f-53c6-82a6-83a8b08411a8", 00:33:37.575 "is_configured": true, 00:33:37.575 "data_offset": 2048, 00:33:37.575 "data_size": 63488 00:33:37.575 }, 00:33:37.575 { 00:33:37.575 "name": null, 00:33:37.575 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:37.575 "is_configured": false, 00:33:37.575 "data_offset": 0, 00:33:37.575 "data_size": 63488 00:33:37.575 }, 00:33:37.575 { 00:33:37.575 "name": "BaseBdev3", 
00:33:37.575 "uuid": "e22b9e99-6fa4-5138-9638-653e28acd0a4", 00:33:37.575 "is_configured": true, 00:33:37.575 "data_offset": 2048, 00:33:37.576 "data_size": 63488 00:33:37.576 }, 00:33:37.576 { 00:33:37.576 "name": "BaseBdev4", 00:33:37.576 "uuid": "4e73ea17-1207-5d67-ac7b-a56386e83c5b", 00:33:37.576 "is_configured": true, 00:33:37.576 "data_offset": 2048, 00:33:37.576 "data_size": 63488 00:33:37.576 } 00:33:37.576 ] 00:33:37.576 }' 00:33:37.576 18:32:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:33:37.576 18:32:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:33:37.576 18:32:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:33:37.576 18:32:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:33:37.576 18:32:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@709 -- # break 00:33:37.576 18:32:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:33:37.576 18:32:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:33:37.576 18:32:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:33:37.576 18:32:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:33:37.576 18:32:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:33:37.576 18:32:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:37.576 18:32:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:37.576 18:32:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:33:37.576 18:32:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | 
select(.name == "raid_bdev1")' 00:33:37.576 18:32:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:37.576 18:32:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:33:37.576 "name": "raid_bdev1", 00:33:37.576 "uuid": "199347a2-85c7-47c7-940c-37fcf04983f1", 00:33:37.576 "strip_size_kb": 0, 00:33:37.576 "state": "online", 00:33:37.576 "raid_level": "raid1", 00:33:37.576 "superblock": true, 00:33:37.576 "num_base_bdevs": 4, 00:33:37.576 "num_base_bdevs_discovered": 3, 00:33:37.576 "num_base_bdevs_operational": 3, 00:33:37.576 "base_bdevs_list": [ 00:33:37.576 { 00:33:37.576 "name": "spare", 00:33:37.576 "uuid": "76c9c669-e82f-53c6-82a6-83a8b08411a8", 00:33:37.576 "is_configured": true, 00:33:37.576 "data_offset": 2048, 00:33:37.576 "data_size": 63488 00:33:37.576 }, 00:33:37.576 { 00:33:37.576 "name": null, 00:33:37.576 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:37.576 "is_configured": false, 00:33:37.576 "data_offset": 0, 00:33:37.576 "data_size": 63488 00:33:37.576 }, 00:33:37.576 { 00:33:37.576 "name": "BaseBdev3", 00:33:37.576 "uuid": "e22b9e99-6fa4-5138-9638-653e28acd0a4", 00:33:37.576 "is_configured": true, 00:33:37.576 "data_offset": 2048, 00:33:37.576 "data_size": 63488 00:33:37.576 }, 00:33:37.576 { 00:33:37.576 "name": "BaseBdev4", 00:33:37.576 "uuid": "4e73ea17-1207-5d67-ac7b-a56386e83c5b", 00:33:37.576 "is_configured": true, 00:33:37.576 "data_offset": 2048, 00:33:37.576 "data_size": 63488 00:33:37.576 } 00:33:37.576 ] 00:33:37.576 }' 00:33:37.576 18:32:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:33:37.576 18:32:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:33:37.576 18:32:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:33:37.576 18:32:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ 
none == \n\o\n\e ]] 00:33:37.576 18:32:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:33:37.576 18:32:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:33:37.576 18:32:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:33:37.576 18:32:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:33:37.576 18:32:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:33:37.576 18:32:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:33:37.576 18:32:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:33:37.576 18:32:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:33:37.576 18:32:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:33:37.576 18:32:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:33:37.835 18:32:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:37.835 18:32:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:37.835 18:32:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:37.835 18:32:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:33:37.835 18:32:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:37.835 18:32:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:33:37.835 "name": "raid_bdev1", 00:33:37.835 "uuid": "199347a2-85c7-47c7-940c-37fcf04983f1", 00:33:37.835 "strip_size_kb": 0, 00:33:37.835 "state": "online", 00:33:37.835 
"raid_level": "raid1", 00:33:37.835 "superblock": true, 00:33:37.835 "num_base_bdevs": 4, 00:33:37.835 "num_base_bdevs_discovered": 3, 00:33:37.835 "num_base_bdevs_operational": 3, 00:33:37.835 "base_bdevs_list": [ 00:33:37.835 { 00:33:37.835 "name": "spare", 00:33:37.835 "uuid": "76c9c669-e82f-53c6-82a6-83a8b08411a8", 00:33:37.835 "is_configured": true, 00:33:37.835 "data_offset": 2048, 00:33:37.835 "data_size": 63488 00:33:37.835 }, 00:33:37.835 { 00:33:37.835 "name": null, 00:33:37.835 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:37.835 "is_configured": false, 00:33:37.835 "data_offset": 0, 00:33:37.835 "data_size": 63488 00:33:37.835 }, 00:33:37.836 { 00:33:37.836 "name": "BaseBdev3", 00:33:37.836 "uuid": "e22b9e99-6fa4-5138-9638-653e28acd0a4", 00:33:37.836 "is_configured": true, 00:33:37.836 "data_offset": 2048, 00:33:37.836 "data_size": 63488 00:33:37.836 }, 00:33:37.836 { 00:33:37.836 "name": "BaseBdev4", 00:33:37.836 "uuid": "4e73ea17-1207-5d67-ac7b-a56386e83c5b", 00:33:37.836 "is_configured": true, 00:33:37.836 "data_offset": 2048, 00:33:37.836 "data_size": 63488 00:33:37.836 } 00:33:37.836 ] 00:33:37.836 }' 00:33:37.836 18:32:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:33:37.836 18:32:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:33:38.095 18:32:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:33:38.095 18:32:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:38.095 18:32:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:33:38.095 [2024-12-06 18:32:08.918007] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:33:38.095 [2024-12-06 18:32:08.918049] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:33:38.095 81.00 IOPS, 243.00 MiB/s 00:33:38.095 Latency(us) 
00:33:38.095 [2024-12-06T18:32:09.044Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:38.095 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:33:38.095 raid_bdev1 : 8.09 80.31 240.93 0.00 0.00 17939.61 305.97 113701.01 00:33:38.095 [2024-12-06T18:32:09.044Z] =================================================================================================================== 00:33:38.095 [2024-12-06T18:32:09.044Z] Total : 80.31 240.93 0.00 0.00 17939.61 305.97 113701.01 00:33:38.095 [2024-12-06 18:32:09.035187] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:33:38.095 [2024-12-06 18:32:09.035304] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:33:38.095 [2024-12-06 18:32:09.035414] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:33:38.095 [2024-12-06 18:32:09.035431] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:33:38.095 { 00:33:38.095 "results": [ 00:33:38.095 { 00:33:38.095 "job": "raid_bdev1", 00:33:38.095 "core_mask": "0x1", 00:33:38.095 "workload": "randrw", 00:33:38.095 "percentage": 50, 00:33:38.095 "status": "finished", 00:33:38.095 "queue_depth": 2, 00:33:38.095 "io_size": 3145728, 00:33:38.095 "runtime": 8.09353, 00:33:38.095 "iops": 80.31106328141121, 00:33:38.095 "mibps": 240.93318984423362, 00:33:38.095 "io_failed": 0, 00:33:38.095 "io_timeout": 0, 00:33:38.095 "avg_latency_us": 17939.61195675008, 00:33:38.095 "min_latency_us": 305.96626506024097, 00:33:38.095 "max_latency_us": 113701.01204819277 00:33:38.095 } 00:33:38.095 ], 00:33:38.095 "core_count": 1 00:33:38.095 } 00:33:38.095 18:32:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:38.377 18:32:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 
00:33:38.377 18:32:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # jq length 00:33:38.377 18:32:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:38.377 18:32:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:33:38.378 18:32:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:38.378 18:32:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:33:38.378 18:32:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:33:38.378 18:32:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:33:38.378 18:32:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:33:38.378 18:32:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:33:38.378 18:32:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:33:38.378 18:32:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:33:38.378 18:32:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:33:38.378 18:32:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:33:38.378 18:32:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:33:38.378 18:32:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:33:38.378 18:32:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:33:38.378 18:32:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:33:38.378 /dev/nbd0 00:33:38.721 18:32:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 
00:33:38.721 18:32:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:33:38.721 18:32:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:33:38.721 18:32:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # local i 00:33:38.721 18:32:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:33:38.721 18:32:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:33:38.721 18:32:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:33:38.721 18:32:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@877 -- # break 00:33:38.721 18:32:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:33:38.721 18:32:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:33:38.721 18:32:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:33:38.721 1+0 records in 00:33:38.721 1+0 records out 00:33:38.721 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000328021 s, 12.5 MB/s 00:33:38.721 18:32:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:33:38.721 18:32:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # size=4096 00:33:38.721 18:32:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:33:38.721 18:32:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:33:38.721 18:32:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@893 -- # return 0 00:33:38.721 18:32:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # 
(( i++ )) 00:33:38.721 18:32:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:33:38.721 18:32:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:33:38.721 18:32:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z '' ']' 00:33:38.721 18:32:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@728 -- # continue 00:33:38.721 18:32:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:33:38.721 18:32:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev3 ']' 00:33:38.721 18:32:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev3 /dev/nbd1 00:33:38.721 18:32:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:33:38.721 18:32:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev3') 00:33:38.721 18:32:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:33:38.721 18:32:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:33:38.721 18:32:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:33:38.721 18:32:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:33:38.721 18:32:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:33:38.721 18:32:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:33:38.721 18:32:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev3 /dev/nbd1 00:33:38.721 /dev/nbd1 00:33:38.721 18:32:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:33:38.721 18:32:09 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:33:38.721 18:32:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:33:38.721 18:32:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # local i 00:33:38.721 18:32:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:33:38.721 18:32:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:33:38.721 18:32:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:33:38.721 18:32:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@877 -- # break 00:33:38.721 18:32:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:33:38.721 18:32:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:33:38.721 18:32:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:33:38.721 1+0 records in 00:33:38.721 1+0 records out 00:33:38.721 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000419075 s, 9.8 MB/s 00:33:38.721 18:32:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:33:38.721 18:32:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # size=4096 00:33:38.721 18:32:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:33:38.721 18:32:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:33:38.721 18:32:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@893 -- # return 0 00:33:38.721 18:32:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:33:38.721 18:32:09 
bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:33:38.721 18:32:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:33:38.980 18:32:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:33:38.980 18:32:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:33:38.980 18:32:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:33:38.980 18:32:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:33:38.980 18:32:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:33:38.980 18:32:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:33:38.980 18:32:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:33:39.240 18:32:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:33:39.240 18:32:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:33:39.240 18:32:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:33:39.240 18:32:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:33:39.240 18:32:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:33:39.240 18:32:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:33:39.240 18:32:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:33:39.240 18:32:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:33:39.240 18:32:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:33:39.240 
18:32:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev4 ']' 00:33:39.240 18:32:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev4 /dev/nbd1 00:33:39.240 18:32:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:33:39.240 18:32:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev4') 00:33:39.240 18:32:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:33:39.240 18:32:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:33:39.240 18:32:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:33:39.240 18:32:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:33:39.240 18:32:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:33:39.240 18:32:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:33:39.240 18:32:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev4 /dev/nbd1 00:33:39.500 /dev/nbd1 00:33:39.500 18:32:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:33:39.500 18:32:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:33:39.500 18:32:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:33:39.500 18:32:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # local i 00:33:39.500 18:32:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:33:39.500 18:32:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:33:39.500 18:32:10 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:33:39.500 18:32:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@877 -- # break 00:33:39.500 18:32:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:33:39.500 18:32:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:33:39.500 18:32:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:33:39.500 1+0 records in 00:33:39.500 1+0 records out 00:33:39.500 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000587396 s, 7.0 MB/s 00:33:39.500 18:32:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:33:39.500 18:32:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # size=4096 00:33:39.500 18:32:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:33:39.500 18:32:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:33:39.500 18:32:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@893 -- # return 0 00:33:39.500 18:32:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:33:39.500 18:32:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:33:39.500 18:32:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:33:39.500 18:32:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:33:39.500 18:32:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:33:39.500 18:32:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 
00:33:39.500 18:32:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:33:39.500 18:32:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:33:39.500 18:32:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:33:39.500 18:32:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:33:39.759 18:32:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:33:39.759 18:32:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:33:39.759 18:32:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:33:39.759 18:32:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:33:39.759 18:32:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:33:39.759 18:32:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:33:39.759 18:32:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:33:39.759 18:32:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:33:39.759 18:32:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:33:39.759 18:32:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:33:39.759 18:32:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:33:39.759 18:32:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:33:39.759 18:32:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:33:39.759 18:32:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:33:39.759 18:32:10 
bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:33:40.019 18:32:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:33:40.019 18:32:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:33:40.019 18:32:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:33:40.019 18:32:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:33:40.019 18:32:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:33:40.019 18:32:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:33:40.019 18:32:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:33:40.019 18:32:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:33:40.019 18:32:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:33:40.019 18:32:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:33:40.019 18:32:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:40.019 18:32:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:33:40.019 18:32:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:40.019 18:32:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:33:40.019 18:32:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:40.019 18:32:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:33:40.019 [2024-12-06 18:32:10.855219] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:33:40.019 
[2024-12-06 18:32:10.855285] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:33:40.019 [2024-12-06 18:32:10.855312] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:33:40.019 [2024-12-06 18:32:10.855327] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:33:40.019 [2024-12-06 18:32:10.858209] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:33:40.019 [2024-12-06 18:32:10.858259] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:33:40.019 [2024-12-06 18:32:10.858363] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:33:40.019 [2024-12-06 18:32:10.858423] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:33:40.019 [2024-12-06 18:32:10.858590] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:33:40.019 [2024-12-06 18:32:10.858703] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:33:40.019 spare 00:33:40.019 18:32:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:40.019 18:32:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:33:40.019 18:32:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:40.019 18:32:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:33:40.019 [2024-12-06 18:32:10.958629] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:33:40.019 [2024-12-06 18:32:10.958666] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:33:40.019 [2024-12-06 18:32:10.959021] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000037160 00:33:40.019 [2024-12-06 18:32:10.959241] 
bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:33:40.019 [2024-12-06 18:32:10.959254] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:33:40.019 [2024-12-06 18:32:10.959450] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:33:40.019 18:32:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:40.019 18:32:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:33:40.019 18:32:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:33:40.019 18:32:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:33:40.019 18:32:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:33:40.019 18:32:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:33:40.019 18:32:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:33:40.019 18:32:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:33:40.019 18:32:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:33:40.019 18:32:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:33:40.019 18:32:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:33:40.279 18:32:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:40.279 18:32:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:40.279 18:32:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:33:40.279 18:32:10 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:40.279 18:32:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:40.279 18:32:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:33:40.279 "name": "raid_bdev1", 00:33:40.279 "uuid": "199347a2-85c7-47c7-940c-37fcf04983f1", 00:33:40.279 "strip_size_kb": 0, 00:33:40.279 "state": "online", 00:33:40.279 "raid_level": "raid1", 00:33:40.279 "superblock": true, 00:33:40.279 "num_base_bdevs": 4, 00:33:40.279 "num_base_bdevs_discovered": 3, 00:33:40.279 "num_base_bdevs_operational": 3, 00:33:40.279 "base_bdevs_list": [ 00:33:40.279 { 00:33:40.279 "name": "spare", 00:33:40.279 "uuid": "76c9c669-e82f-53c6-82a6-83a8b08411a8", 00:33:40.279 "is_configured": true, 00:33:40.279 "data_offset": 2048, 00:33:40.279 "data_size": 63488 00:33:40.279 }, 00:33:40.279 { 00:33:40.279 "name": null, 00:33:40.279 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:40.279 "is_configured": false, 00:33:40.279 "data_offset": 2048, 00:33:40.279 "data_size": 63488 00:33:40.279 }, 00:33:40.279 { 00:33:40.279 "name": "BaseBdev3", 00:33:40.279 "uuid": "e22b9e99-6fa4-5138-9638-653e28acd0a4", 00:33:40.279 "is_configured": true, 00:33:40.279 "data_offset": 2048, 00:33:40.279 "data_size": 63488 00:33:40.279 }, 00:33:40.279 { 00:33:40.279 "name": "BaseBdev4", 00:33:40.279 "uuid": "4e73ea17-1207-5d67-ac7b-a56386e83c5b", 00:33:40.279 "is_configured": true, 00:33:40.279 "data_offset": 2048, 00:33:40.279 "data_size": 63488 00:33:40.279 } 00:33:40.279 ] 00:33:40.279 }' 00:33:40.279 18:32:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:33:40.279 18:32:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:33:40.540 18:32:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:33:40.540 18:32:11 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:33:40.540 18:32:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:33:40.540 18:32:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:33:40.540 18:32:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:33:40.540 18:32:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:40.540 18:32:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:40.540 18:32:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:33:40.540 18:32:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:40.540 18:32:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:40.540 18:32:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:33:40.540 "name": "raid_bdev1", 00:33:40.540 "uuid": "199347a2-85c7-47c7-940c-37fcf04983f1", 00:33:40.540 "strip_size_kb": 0, 00:33:40.540 "state": "online", 00:33:40.540 "raid_level": "raid1", 00:33:40.540 "superblock": true, 00:33:40.540 "num_base_bdevs": 4, 00:33:40.540 "num_base_bdevs_discovered": 3, 00:33:40.540 "num_base_bdevs_operational": 3, 00:33:40.540 "base_bdevs_list": [ 00:33:40.540 { 00:33:40.540 "name": "spare", 00:33:40.540 "uuid": "76c9c669-e82f-53c6-82a6-83a8b08411a8", 00:33:40.540 "is_configured": true, 00:33:40.540 "data_offset": 2048, 00:33:40.540 "data_size": 63488 00:33:40.540 }, 00:33:40.540 { 00:33:40.540 "name": null, 00:33:40.540 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:40.540 "is_configured": false, 00:33:40.540 "data_offset": 2048, 00:33:40.540 "data_size": 63488 00:33:40.540 }, 00:33:40.540 { 00:33:40.540 "name": "BaseBdev3", 00:33:40.540 "uuid": "e22b9e99-6fa4-5138-9638-653e28acd0a4", 
00:33:40.540 "is_configured": true, 00:33:40.540 "data_offset": 2048, 00:33:40.540 "data_size": 63488 00:33:40.540 }, 00:33:40.540 { 00:33:40.540 "name": "BaseBdev4", 00:33:40.540 "uuid": "4e73ea17-1207-5d67-ac7b-a56386e83c5b", 00:33:40.540 "is_configured": true, 00:33:40.540 "data_offset": 2048, 00:33:40.540 "data_size": 63488 00:33:40.540 } 00:33:40.540 ] 00:33:40.540 }' 00:33:40.540 18:32:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:33:40.540 18:32:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:33:40.540 18:32:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:33:40.800 18:32:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:33:40.800 18:32:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:40.800 18:32:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:40.800 18:32:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:33:40.800 18:32:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:33:40.800 18:32:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:40.800 18:32:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:33:40.800 18:32:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:33:40.800 18:32:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:40.800 18:32:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:33:40.800 [2024-12-06 18:32:11.558858] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:33:40.800 18:32:11 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:40.800 18:32:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:33:40.800 18:32:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:33:40.800 18:32:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:33:40.800 18:32:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:33:40.800 18:32:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:33:40.800 18:32:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:33:40.800 18:32:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:33:40.800 18:32:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:33:40.800 18:32:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:33:40.800 18:32:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:33:40.800 18:32:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:40.800 18:32:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:40.800 18:32:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:33:40.800 18:32:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:40.800 18:32:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:40.800 18:32:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:33:40.800 "name": "raid_bdev1", 00:33:40.800 "uuid": "199347a2-85c7-47c7-940c-37fcf04983f1", 00:33:40.800 "strip_size_kb": 0, 00:33:40.800 "state": 
"online", 00:33:40.800 "raid_level": "raid1", 00:33:40.800 "superblock": true, 00:33:40.800 "num_base_bdevs": 4, 00:33:40.800 "num_base_bdevs_discovered": 2, 00:33:40.800 "num_base_bdevs_operational": 2, 00:33:40.800 "base_bdevs_list": [ 00:33:40.800 { 00:33:40.800 "name": null, 00:33:40.800 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:40.800 "is_configured": false, 00:33:40.800 "data_offset": 0, 00:33:40.800 "data_size": 63488 00:33:40.800 }, 00:33:40.800 { 00:33:40.800 "name": null, 00:33:40.800 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:40.800 "is_configured": false, 00:33:40.800 "data_offset": 2048, 00:33:40.800 "data_size": 63488 00:33:40.800 }, 00:33:40.800 { 00:33:40.800 "name": "BaseBdev3", 00:33:40.800 "uuid": "e22b9e99-6fa4-5138-9638-653e28acd0a4", 00:33:40.800 "is_configured": true, 00:33:40.800 "data_offset": 2048, 00:33:40.800 "data_size": 63488 00:33:40.800 }, 00:33:40.800 { 00:33:40.800 "name": "BaseBdev4", 00:33:40.800 "uuid": "4e73ea17-1207-5d67-ac7b-a56386e83c5b", 00:33:40.800 "is_configured": true, 00:33:40.800 "data_offset": 2048, 00:33:40.801 "data_size": 63488 00:33:40.801 } 00:33:40.801 ] 00:33:40.801 }' 00:33:40.801 18:32:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:33:40.801 18:32:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:33:41.061 18:32:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:33:41.061 18:32:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:41.061 18:32:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:33:41.061 [2024-12-06 18:32:11.978696] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:33:41.061 [2024-12-06 18:32:11.978972] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev 
raid_bdev1 (6) 00:33:41.061 [2024-12-06 18:32:11.978996] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:33:41.061 [2024-12-06 18:32:11.979042] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:33:41.061 [2024-12-06 18:32:11.994804] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000037230 00:33:41.061 18:32:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:41.061 18:32:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@757 -- # sleep 1 00:33:41.061 [2024-12-06 18:32:11.997303] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:33:42.439 18:32:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:33:42.439 18:32:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:33:42.439 18:32:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:33:42.439 18:32:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:33:42.439 18:32:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:33:42.439 18:32:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:42.439 18:32:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:42.440 18:32:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:42.440 18:32:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:33:42.440 18:32:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:42.440 18:32:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:33:42.440 
"name": "raid_bdev1", 00:33:42.440 "uuid": "199347a2-85c7-47c7-940c-37fcf04983f1", 00:33:42.440 "strip_size_kb": 0, 00:33:42.440 "state": "online", 00:33:42.440 "raid_level": "raid1", 00:33:42.440 "superblock": true, 00:33:42.440 "num_base_bdevs": 4, 00:33:42.440 "num_base_bdevs_discovered": 3, 00:33:42.440 "num_base_bdevs_operational": 3, 00:33:42.440 "process": { 00:33:42.440 "type": "rebuild", 00:33:42.440 "target": "spare", 00:33:42.440 "progress": { 00:33:42.440 "blocks": 20480, 00:33:42.440 "percent": 32 00:33:42.440 } 00:33:42.440 }, 00:33:42.440 "base_bdevs_list": [ 00:33:42.440 { 00:33:42.440 "name": "spare", 00:33:42.440 "uuid": "76c9c669-e82f-53c6-82a6-83a8b08411a8", 00:33:42.440 "is_configured": true, 00:33:42.440 "data_offset": 2048, 00:33:42.440 "data_size": 63488 00:33:42.440 }, 00:33:42.440 { 00:33:42.440 "name": null, 00:33:42.440 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:42.440 "is_configured": false, 00:33:42.440 "data_offset": 2048, 00:33:42.440 "data_size": 63488 00:33:42.440 }, 00:33:42.440 { 00:33:42.440 "name": "BaseBdev3", 00:33:42.440 "uuid": "e22b9e99-6fa4-5138-9638-653e28acd0a4", 00:33:42.440 "is_configured": true, 00:33:42.440 "data_offset": 2048, 00:33:42.440 "data_size": 63488 00:33:42.440 }, 00:33:42.440 { 00:33:42.440 "name": "BaseBdev4", 00:33:42.440 "uuid": "4e73ea17-1207-5d67-ac7b-a56386e83c5b", 00:33:42.440 "is_configured": true, 00:33:42.440 "data_offset": 2048, 00:33:42.440 "data_size": 63488 00:33:42.440 } 00:33:42.440 ] 00:33:42.440 }' 00:33:42.440 18:32:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:33:42.440 18:32:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:33:42.440 18:32:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:33:42.440 18:32:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:33:42.440 
18:32:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:33:42.440 18:32:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:42.440 18:32:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:33:42.440 [2024-12-06 18:32:13.150033] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:33:42.440 [2024-12-06 18:32:13.206769] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:33:42.440 [2024-12-06 18:32:13.206849] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:33:42.440 [2024-12-06 18:32:13.206868] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:33:42.440 [2024-12-06 18:32:13.206881] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:33:42.440 18:32:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:42.440 18:32:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:33:42.440 18:32:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:33:42.440 18:32:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:33:42.440 18:32:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:33:42.440 18:32:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:33:42.440 18:32:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:33:42.440 18:32:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:33:42.440 18:32:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:33:42.440 18:32:13 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:33:42.440 18:32:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:33:42.440 18:32:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:42.440 18:32:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:42.440 18:32:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:42.440 18:32:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:33:42.440 18:32:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:42.440 18:32:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:33:42.440 "name": "raid_bdev1", 00:33:42.440 "uuid": "199347a2-85c7-47c7-940c-37fcf04983f1", 00:33:42.440 "strip_size_kb": 0, 00:33:42.440 "state": "online", 00:33:42.440 "raid_level": "raid1", 00:33:42.440 "superblock": true, 00:33:42.440 "num_base_bdevs": 4, 00:33:42.440 "num_base_bdevs_discovered": 2, 00:33:42.440 "num_base_bdevs_operational": 2, 00:33:42.440 "base_bdevs_list": [ 00:33:42.440 { 00:33:42.440 "name": null, 00:33:42.440 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:42.440 "is_configured": false, 00:33:42.440 "data_offset": 0, 00:33:42.440 "data_size": 63488 00:33:42.440 }, 00:33:42.440 { 00:33:42.440 "name": null, 00:33:42.440 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:42.440 "is_configured": false, 00:33:42.440 "data_offset": 2048, 00:33:42.440 "data_size": 63488 00:33:42.440 }, 00:33:42.440 { 00:33:42.440 "name": "BaseBdev3", 00:33:42.440 "uuid": "e22b9e99-6fa4-5138-9638-653e28acd0a4", 00:33:42.440 "is_configured": true, 00:33:42.440 "data_offset": 2048, 00:33:42.440 "data_size": 63488 00:33:42.440 }, 00:33:42.440 { 00:33:42.440 "name": "BaseBdev4", 00:33:42.440 "uuid": 
"4e73ea17-1207-5d67-ac7b-a56386e83c5b", 00:33:42.440 "is_configured": true, 00:33:42.440 "data_offset": 2048, 00:33:42.440 "data_size": 63488 00:33:42.440 } 00:33:42.440 ] 00:33:42.440 }' 00:33:42.440 18:32:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:33:42.440 18:32:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:33:43.009 18:32:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:33:43.009 18:32:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:43.009 18:32:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:33:43.009 [2024-12-06 18:32:13.654750] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:33:43.009 [2024-12-06 18:32:13.654843] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:33:43.009 [2024-12-06 18:32:13.654882] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:33:43.009 [2024-12-06 18:32:13.654899] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:33:43.009 [2024-12-06 18:32:13.655537] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:33:43.009 [2024-12-06 18:32:13.655573] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:33:43.009 [2024-12-06 18:32:13.655688] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:33:43.009 [2024-12-06 18:32:13.655708] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:33:43.009 [2024-12-06 18:32:13.655721] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:33:43.009 [2024-12-06 18:32:13.655756] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:33:43.009 [2024-12-06 18:32:13.670895] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000037300 00:33:43.009 spare 00:33:43.009 18:32:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:43.009 18:32:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@764 -- # sleep 1 00:33:43.009 [2024-12-06 18:32:13.673404] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:33:44.092 18:32:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:33:44.092 18:32:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:33:44.092 18:32:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:33:44.092 18:32:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:33:44.092 18:32:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:33:44.092 18:32:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:44.092 18:32:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:44.092 18:32:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:44.092 18:32:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:33:44.092 18:32:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:44.092 18:32:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:33:44.092 "name": "raid_bdev1", 00:33:44.092 "uuid": "199347a2-85c7-47c7-940c-37fcf04983f1", 00:33:44.092 "strip_size_kb": 0, 00:33:44.092 
"state": "online", 00:33:44.092 "raid_level": "raid1", 00:33:44.092 "superblock": true, 00:33:44.092 "num_base_bdevs": 4, 00:33:44.092 "num_base_bdevs_discovered": 3, 00:33:44.092 "num_base_bdevs_operational": 3, 00:33:44.092 "process": { 00:33:44.092 "type": "rebuild", 00:33:44.092 "target": "spare", 00:33:44.092 "progress": { 00:33:44.092 "blocks": 20480, 00:33:44.092 "percent": 32 00:33:44.092 } 00:33:44.092 }, 00:33:44.092 "base_bdevs_list": [ 00:33:44.092 { 00:33:44.092 "name": "spare", 00:33:44.092 "uuid": "76c9c669-e82f-53c6-82a6-83a8b08411a8", 00:33:44.092 "is_configured": true, 00:33:44.092 "data_offset": 2048, 00:33:44.092 "data_size": 63488 00:33:44.092 }, 00:33:44.092 { 00:33:44.092 "name": null, 00:33:44.092 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:44.092 "is_configured": false, 00:33:44.092 "data_offset": 2048, 00:33:44.092 "data_size": 63488 00:33:44.092 }, 00:33:44.092 { 00:33:44.092 "name": "BaseBdev3", 00:33:44.092 "uuid": "e22b9e99-6fa4-5138-9638-653e28acd0a4", 00:33:44.092 "is_configured": true, 00:33:44.092 "data_offset": 2048, 00:33:44.092 "data_size": 63488 00:33:44.092 }, 00:33:44.092 { 00:33:44.092 "name": "BaseBdev4", 00:33:44.092 "uuid": "4e73ea17-1207-5d67-ac7b-a56386e83c5b", 00:33:44.092 "is_configured": true, 00:33:44.092 "data_offset": 2048, 00:33:44.092 "data_size": 63488 00:33:44.092 } 00:33:44.092 ] 00:33:44.092 }' 00:33:44.092 18:32:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:33:44.092 18:32:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:33:44.092 18:32:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:33:44.092 18:32:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:33:44.092 18:32:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:33:44.092 18:32:14 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:44.092 18:32:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:33:44.092 [2024-12-06 18:32:14.825390] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:33:44.092 [2024-12-06 18:32:14.882964] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:33:44.092 [2024-12-06 18:32:14.883035] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:33:44.092 [2024-12-06 18:32:14.883059] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:33:44.092 [2024-12-06 18:32:14.883069] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:33:44.092 18:32:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:44.092 18:32:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:33:44.092 18:32:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:33:44.092 18:32:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:33:44.092 18:32:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:33:44.092 18:32:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:33:44.092 18:32:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:33:44.092 18:32:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:33:44.092 18:32:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:33:44.092 18:32:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:33:44.092 18:32:14 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:33:44.092 18:32:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:44.092 18:32:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:44.093 18:32:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:44.093 18:32:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:33:44.093 18:32:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:44.093 18:32:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:33:44.093 "name": "raid_bdev1", 00:33:44.093 "uuid": "199347a2-85c7-47c7-940c-37fcf04983f1", 00:33:44.093 "strip_size_kb": 0, 00:33:44.093 "state": "online", 00:33:44.093 "raid_level": "raid1", 00:33:44.093 "superblock": true, 00:33:44.093 "num_base_bdevs": 4, 00:33:44.093 "num_base_bdevs_discovered": 2, 00:33:44.093 "num_base_bdevs_operational": 2, 00:33:44.093 "base_bdevs_list": [ 00:33:44.093 { 00:33:44.093 "name": null, 00:33:44.093 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:44.093 "is_configured": false, 00:33:44.093 "data_offset": 0, 00:33:44.093 "data_size": 63488 00:33:44.093 }, 00:33:44.093 { 00:33:44.093 "name": null, 00:33:44.093 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:44.093 "is_configured": false, 00:33:44.093 "data_offset": 2048, 00:33:44.093 "data_size": 63488 00:33:44.093 }, 00:33:44.093 { 00:33:44.093 "name": "BaseBdev3", 00:33:44.093 "uuid": "e22b9e99-6fa4-5138-9638-653e28acd0a4", 00:33:44.093 "is_configured": true, 00:33:44.093 "data_offset": 2048, 00:33:44.093 "data_size": 63488 00:33:44.093 }, 00:33:44.093 { 00:33:44.093 "name": "BaseBdev4", 00:33:44.093 "uuid": "4e73ea17-1207-5d67-ac7b-a56386e83c5b", 00:33:44.093 "is_configured": true, 00:33:44.093 "data_offset": 2048, 00:33:44.093 
"data_size": 63488 00:33:44.093 } 00:33:44.093 ] 00:33:44.093 }' 00:33:44.093 18:32:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:33:44.093 18:32:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:33:44.659 18:32:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:33:44.659 18:32:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:33:44.659 18:32:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:33:44.659 18:32:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:33:44.659 18:32:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:33:44.659 18:32:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:44.659 18:32:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:44.659 18:32:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:44.659 18:32:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:33:44.659 18:32:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:44.659 18:32:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:33:44.659 "name": "raid_bdev1", 00:33:44.659 "uuid": "199347a2-85c7-47c7-940c-37fcf04983f1", 00:33:44.659 "strip_size_kb": 0, 00:33:44.659 "state": "online", 00:33:44.659 "raid_level": "raid1", 00:33:44.659 "superblock": true, 00:33:44.659 "num_base_bdevs": 4, 00:33:44.659 "num_base_bdevs_discovered": 2, 00:33:44.659 "num_base_bdevs_operational": 2, 00:33:44.659 "base_bdevs_list": [ 00:33:44.659 { 00:33:44.659 "name": null, 00:33:44.659 "uuid": "00000000-0000-0000-0000-000000000000", 
00:33:44.659 "is_configured": false, 00:33:44.659 "data_offset": 0, 00:33:44.659 "data_size": 63488 00:33:44.659 }, 00:33:44.659 { 00:33:44.659 "name": null, 00:33:44.659 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:44.659 "is_configured": false, 00:33:44.659 "data_offset": 2048, 00:33:44.659 "data_size": 63488 00:33:44.659 }, 00:33:44.659 { 00:33:44.659 "name": "BaseBdev3", 00:33:44.659 "uuid": "e22b9e99-6fa4-5138-9638-653e28acd0a4", 00:33:44.659 "is_configured": true, 00:33:44.659 "data_offset": 2048, 00:33:44.659 "data_size": 63488 00:33:44.659 }, 00:33:44.659 { 00:33:44.659 "name": "BaseBdev4", 00:33:44.659 "uuid": "4e73ea17-1207-5d67-ac7b-a56386e83c5b", 00:33:44.659 "is_configured": true, 00:33:44.659 "data_offset": 2048, 00:33:44.659 "data_size": 63488 00:33:44.659 } 00:33:44.659 ] 00:33:44.659 }' 00:33:44.659 18:32:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:33:44.659 18:32:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:33:44.659 18:32:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:33:44.659 18:32:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:33:44.659 18:32:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:33:44.659 18:32:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:44.659 18:32:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:33:44.659 18:32:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:44.659 18:32:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:33:44.659 18:32:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:44.659 18:32:15 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:33:44.659 [2024-12-06 18:32:15.454688] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:33:44.659 [2024-12-06 18:32:15.454752] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:33:44.659 [2024-12-06 18:32:15.454781] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000cc80 00:33:44.659 [2024-12-06 18:32:15.454794] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:33:44.659 [2024-12-06 18:32:15.455378] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:33:44.659 [2024-12-06 18:32:15.455406] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:33:44.660 [2024-12-06 18:32:15.455510] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:33:44.660 [2024-12-06 18:32:15.455531] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:33:44.660 [2024-12-06 18:32:15.455546] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:33:44.660 [2024-12-06 18:32:15.455559] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:33:44.660 BaseBdev1 00:33:44.660 18:32:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:44.660 18:32:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@775 -- # sleep 1 00:33:45.641 18:32:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:33:45.641 18:32:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:33:45.641 18:32:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:33:45.641 18:32:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:33:45.641 18:32:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:33:45.641 18:32:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:33:45.641 18:32:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:33:45.641 18:32:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:33:45.641 18:32:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:33:45.641 18:32:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:33:45.641 18:32:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:45.641 18:32:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:45.641 18:32:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:45.641 18:32:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:33:45.641 18:32:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:45.641 18:32:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:33:45.641 "name": "raid_bdev1", 00:33:45.641 "uuid": "199347a2-85c7-47c7-940c-37fcf04983f1", 00:33:45.641 "strip_size_kb": 0, 00:33:45.641 "state": "online", 00:33:45.641 "raid_level": "raid1", 00:33:45.641 "superblock": true, 00:33:45.641 "num_base_bdevs": 4, 00:33:45.641 "num_base_bdevs_discovered": 2, 00:33:45.641 "num_base_bdevs_operational": 2, 00:33:45.641 "base_bdevs_list": [ 00:33:45.641 { 00:33:45.641 "name": null, 00:33:45.641 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:45.641 "is_configured": false, 00:33:45.641 
"data_offset": 0, 00:33:45.641 "data_size": 63488 00:33:45.641 }, 00:33:45.641 { 00:33:45.641 "name": null, 00:33:45.641 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:45.641 "is_configured": false, 00:33:45.641 "data_offset": 2048, 00:33:45.641 "data_size": 63488 00:33:45.641 }, 00:33:45.641 { 00:33:45.641 "name": "BaseBdev3", 00:33:45.641 "uuid": "e22b9e99-6fa4-5138-9638-653e28acd0a4", 00:33:45.641 "is_configured": true, 00:33:45.641 "data_offset": 2048, 00:33:45.641 "data_size": 63488 00:33:45.641 }, 00:33:45.641 { 00:33:45.641 "name": "BaseBdev4", 00:33:45.641 "uuid": "4e73ea17-1207-5d67-ac7b-a56386e83c5b", 00:33:45.641 "is_configured": true, 00:33:45.641 "data_offset": 2048, 00:33:45.641 "data_size": 63488 00:33:45.641 } 00:33:45.641 ] 00:33:45.641 }' 00:33:45.641 18:32:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:33:45.641 18:32:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:33:46.208 18:32:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:33:46.208 18:32:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:33:46.208 18:32:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:33:46.208 18:32:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:33:46.208 18:32:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:33:46.208 18:32:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:46.208 18:32:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:46.208 18:32:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:46.208 18:32:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set 
+x 00:33:46.208 18:32:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:46.208 18:32:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:33:46.208 "name": "raid_bdev1", 00:33:46.208 "uuid": "199347a2-85c7-47c7-940c-37fcf04983f1", 00:33:46.208 "strip_size_kb": 0, 00:33:46.208 "state": "online", 00:33:46.208 "raid_level": "raid1", 00:33:46.208 "superblock": true, 00:33:46.208 "num_base_bdevs": 4, 00:33:46.208 "num_base_bdevs_discovered": 2, 00:33:46.208 "num_base_bdevs_operational": 2, 00:33:46.208 "base_bdevs_list": [ 00:33:46.208 { 00:33:46.208 "name": null, 00:33:46.208 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:46.208 "is_configured": false, 00:33:46.208 "data_offset": 0, 00:33:46.208 "data_size": 63488 00:33:46.208 }, 00:33:46.208 { 00:33:46.208 "name": null, 00:33:46.208 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:46.208 "is_configured": false, 00:33:46.208 "data_offset": 2048, 00:33:46.208 "data_size": 63488 00:33:46.208 }, 00:33:46.208 { 00:33:46.208 "name": "BaseBdev3", 00:33:46.208 "uuid": "e22b9e99-6fa4-5138-9638-653e28acd0a4", 00:33:46.208 "is_configured": true, 00:33:46.208 "data_offset": 2048, 00:33:46.208 "data_size": 63488 00:33:46.208 }, 00:33:46.208 { 00:33:46.208 "name": "BaseBdev4", 00:33:46.208 "uuid": "4e73ea17-1207-5d67-ac7b-a56386e83c5b", 00:33:46.208 "is_configured": true, 00:33:46.208 "data_offset": 2048, 00:33:46.208 "data_size": 63488 00:33:46.208 } 00:33:46.208 ] 00:33:46.208 }' 00:33:46.208 18:32:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:33:46.208 18:32:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:33:46.208 18:32:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:33:46.208 18:32:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:33:46.208 
18:32:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:33:46.208 18:32:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@652 -- # local es=0 00:33:46.208 18:32:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:33:46.209 18:32:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:33:46.209 18:32:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:33:46.209 18:32:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:33:46.209 18:32:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:33:46.209 18:32:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:33:46.209 18:32:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:46.209 18:32:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:33:46.209 [2024-12-06 18:32:16.996961] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:33:46.209 [2024-12-06 18:32:16.997180] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:33:46.209 [2024-12-06 18:32:16.997200] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:33:46.209 request: 00:33:46.209 { 00:33:46.209 "base_bdev": "BaseBdev1", 00:33:46.209 "raid_bdev": "raid_bdev1", 00:33:46.209 "method": "bdev_raid_add_base_bdev", 00:33:46.209 "req_id": 1 00:33:46.209 } 00:33:46.209 Got JSON-RPC error response 00:33:46.209 response: 00:33:46.209 { 00:33:46.209 "code": -22, 00:33:46.209 
"message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:33:46.209 } 00:33:46.209 18:32:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:33:46.209 18:32:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@655 -- # es=1 00:33:46.209 18:32:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:33:46.209 18:32:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:33:46.209 18:32:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:33:46.209 18:32:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@779 -- # sleep 1 00:33:47.144 18:32:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:33:47.144 18:32:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:33:47.144 18:32:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:33:47.144 18:32:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:33:47.144 18:32:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:33:47.144 18:32:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:33:47.144 18:32:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:33:47.144 18:32:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:33:47.144 18:32:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:33:47.144 18:32:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:33:47.144 18:32:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:47.144 18:32:18 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:47.144 18:32:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:47.144 18:32:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:33:47.144 18:32:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:47.144 18:32:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:33:47.144 "name": "raid_bdev1", 00:33:47.144 "uuid": "199347a2-85c7-47c7-940c-37fcf04983f1", 00:33:47.144 "strip_size_kb": 0, 00:33:47.144 "state": "online", 00:33:47.144 "raid_level": "raid1", 00:33:47.144 "superblock": true, 00:33:47.144 "num_base_bdevs": 4, 00:33:47.144 "num_base_bdevs_discovered": 2, 00:33:47.144 "num_base_bdevs_operational": 2, 00:33:47.144 "base_bdevs_list": [ 00:33:47.144 { 00:33:47.144 "name": null, 00:33:47.144 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:47.144 "is_configured": false, 00:33:47.144 "data_offset": 0, 00:33:47.144 "data_size": 63488 00:33:47.144 }, 00:33:47.144 { 00:33:47.144 "name": null, 00:33:47.144 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:47.144 "is_configured": false, 00:33:47.144 "data_offset": 2048, 00:33:47.144 "data_size": 63488 00:33:47.144 }, 00:33:47.144 { 00:33:47.144 "name": "BaseBdev3", 00:33:47.144 "uuid": "e22b9e99-6fa4-5138-9638-653e28acd0a4", 00:33:47.144 "is_configured": true, 00:33:47.144 "data_offset": 2048, 00:33:47.144 "data_size": 63488 00:33:47.144 }, 00:33:47.144 { 00:33:47.144 "name": "BaseBdev4", 00:33:47.144 "uuid": "4e73ea17-1207-5d67-ac7b-a56386e83c5b", 00:33:47.144 "is_configured": true, 00:33:47.144 "data_offset": 2048, 00:33:47.144 "data_size": 63488 00:33:47.144 } 00:33:47.144 ] 00:33:47.144 }' 00:33:47.144 18:32:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:33:47.144 18:32:18 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:33:47.710 18:32:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:33:47.710 18:32:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:33:47.710 18:32:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:33:47.710 18:32:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:33:47.710 18:32:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:33:47.710 18:32:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:33:47.710 18:32:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:47.710 18:32:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:47.710 18:32:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:33:47.710 18:32:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:47.710 18:32:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:33:47.710 "name": "raid_bdev1", 00:33:47.710 "uuid": "199347a2-85c7-47c7-940c-37fcf04983f1", 00:33:47.710 "strip_size_kb": 0, 00:33:47.710 "state": "online", 00:33:47.710 "raid_level": "raid1", 00:33:47.710 "superblock": true, 00:33:47.710 "num_base_bdevs": 4, 00:33:47.710 "num_base_bdevs_discovered": 2, 00:33:47.710 "num_base_bdevs_operational": 2, 00:33:47.710 "base_bdevs_list": [ 00:33:47.710 { 00:33:47.710 "name": null, 00:33:47.710 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:47.710 "is_configured": false, 00:33:47.710 "data_offset": 0, 00:33:47.710 "data_size": 63488 00:33:47.710 }, 00:33:47.710 { 00:33:47.710 "name": null, 00:33:47.710 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:33:47.710 "is_configured": false, 00:33:47.710 "data_offset": 2048, 00:33:47.710 "data_size": 63488 00:33:47.710 }, 00:33:47.710 { 00:33:47.710 "name": "BaseBdev3", 00:33:47.710 "uuid": "e22b9e99-6fa4-5138-9638-653e28acd0a4", 00:33:47.710 "is_configured": true, 00:33:47.710 "data_offset": 2048, 00:33:47.710 "data_size": 63488 00:33:47.710 }, 00:33:47.710 { 00:33:47.710 "name": "BaseBdev4", 00:33:47.710 "uuid": "4e73ea17-1207-5d67-ac7b-a56386e83c5b", 00:33:47.710 "is_configured": true, 00:33:47.710 "data_offset": 2048, 00:33:47.710 "data_size": 63488 00:33:47.710 } 00:33:47.710 ] 00:33:47.710 }' 00:33:47.710 18:32:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:33:47.710 18:32:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:33:47.710 18:32:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:33:47.710 18:32:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:33:47.710 18:32:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@784 -- # killprocess 78909 00:33:47.710 18:32:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@954 -- # '[' -z 78909 ']' 00:33:47.710 18:32:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@958 -- # kill -0 78909 00:33:47.710 18:32:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@959 -- # uname 00:33:47.710 18:32:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:47.710 18:32:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 78909 00:33:47.710 killing process with pid 78909 00:33:47.710 Received shutdown signal, test time was about 17.672940 seconds 00:33:47.710 00:33:47.710 Latency(us) 00:33:47.710 [2024-12-06T18:32:18.659Z] Device Information : runtime(s) 
IOPS MiB/s Fail/s TO/s Average min max 00:33:47.710 [2024-12-06T18:32:18.659Z] =================================================================================================================== 00:33:47.710 [2024-12-06T18:32:18.659Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:33:47.710 18:32:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:33:47.710 18:32:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:33:47.711 18:32:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@972 -- # echo 'killing process with pid 78909' 00:33:47.711 18:32:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@973 -- # kill 78909 00:33:47.711 [2024-12-06 18:32:18.577662] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:33:47.711 [2024-12-06 18:32:18.577819] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:33:47.711 [2024-12-06 18:32:18.577897] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:33:47.711 18:32:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@978 -- # wait 78909 00:33:47.711 [2024-12-06 18:32:18.577918] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:33:48.279 [2024-12-06 18:32:19.027855] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:33:49.660 18:32:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@786 -- # return 0 00:33:49.660 00:33:49.660 real 0m21.220s 00:33:49.660 user 0m27.106s 00:33:49.660 sys 0m3.043s 00:33:49.660 ************************************ 00:33:49.660 END TEST raid_rebuild_test_sb_io 00:33:49.660 ************************************ 00:33:49.660 18:32:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:49.660 18:32:20 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@10 -- # set +x 00:33:49.660 18:32:20 bdev_raid -- bdev/bdev_raid.sh@985 -- # for n in {3..4} 00:33:49.660 18:32:20 bdev_raid -- bdev/bdev_raid.sh@986 -- # run_test raid5f_state_function_test raid_state_function_test raid5f 3 false 00:33:49.660 18:32:20 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:33:49.660 18:32:20 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:49.660 18:32:20 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:33:49.660 ************************************ 00:33:49.660 START TEST raid5f_state_function_test 00:33:49.660 ************************************ 00:33:49.660 18:32:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid5f 3 false 00:33:49.660 18:32:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:33:49.660 18:32:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:33:49.660 18:32:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:33:49.660 18:32:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:33:49.660 18:32:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:33:49.660 18:32:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:33:49.660 18:32:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:33:49.660 18:32:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:33:49.660 18:32:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:33:49.660 18:32:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:33:49.660 18:32:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:33:49.660 18:32:20 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:33:49.660 18:32:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:33:49.660 18:32:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:33:49.660 18:32:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:33:49.660 18:32:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:33:49.660 18:32:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:33:49.660 18:32:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:33:49.660 18:32:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:33:49.660 18:32:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:33:49.660 18:32:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:33:49.660 18:32:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:33:49.660 18:32:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:33:49.660 18:32:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:33:49.660 18:32:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:33:49.660 18:32:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:33:49.660 18:32:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=79632 00:33:49.660 18:32:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:33:49.660 Process raid pid: 79632 00:33:49.660 
18:32:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 79632' 00:33:49.660 18:32:20 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 79632 00:33:49.660 18:32:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 79632 ']' 00:33:49.660 18:32:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:49.660 18:32:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:49.660 18:32:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:49.660 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:49.660 18:32:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:49.660 18:32:20 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:49.660 [2024-12-06 18:32:20.500075] Starting SPDK v25.01-pre git sha1 b6a18b192 / DPDK 24.03.0 initialization... 
00:33:49.661 [2024-12-06 18:32:20.500376] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:49.920 [2024-12-06 18:32:20.682051] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:49.920 [2024-12-06 18:32:20.811741] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:50.180 [2024-12-06 18:32:21.055214] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:33:50.180 [2024-12-06 18:32:21.055255] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:33:50.440 18:32:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:50.440 18:32:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:33:50.440 18:32:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:33:50.440 18:32:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:50.440 18:32:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:50.440 [2024-12-06 18:32:21.324765] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:33:50.440 [2024-12-06 18:32:21.324829] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:33:50.440 [2024-12-06 18:32:21.324841] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:33:50.440 [2024-12-06 18:32:21.324855] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:33:50.440 [2024-12-06 18:32:21.324862] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:33:50.440 [2024-12-06 18:32:21.324875] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:33:50.440 18:32:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:50.440 18:32:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:33:50.440 18:32:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:33:50.440 18:32:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:33:50.441 18:32:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:33:50.441 18:32:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:33:50.441 18:32:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:33:50.441 18:32:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:33:50.441 18:32:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:33:50.441 18:32:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:33:50.441 18:32:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:33:50.441 18:32:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:50.441 18:32:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:33:50.441 18:32:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:50.441 18:32:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:50.441 18:32:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:33:50.441 18:32:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:33:50.441 "name": "Existed_Raid", 00:33:50.441 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:50.441 "strip_size_kb": 64, 00:33:50.441 "state": "configuring", 00:33:50.441 "raid_level": "raid5f", 00:33:50.441 "superblock": false, 00:33:50.441 "num_base_bdevs": 3, 00:33:50.441 "num_base_bdevs_discovered": 0, 00:33:50.441 "num_base_bdevs_operational": 3, 00:33:50.441 "base_bdevs_list": [ 00:33:50.441 { 00:33:50.441 "name": "BaseBdev1", 00:33:50.441 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:50.441 "is_configured": false, 00:33:50.441 "data_offset": 0, 00:33:50.441 "data_size": 0 00:33:50.441 }, 00:33:50.441 { 00:33:50.441 "name": "BaseBdev2", 00:33:50.441 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:50.441 "is_configured": false, 00:33:50.441 "data_offset": 0, 00:33:50.441 "data_size": 0 00:33:50.441 }, 00:33:50.441 { 00:33:50.441 "name": "BaseBdev3", 00:33:50.441 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:50.441 "is_configured": false, 00:33:50.441 "data_offset": 0, 00:33:50.441 "data_size": 0 00:33:50.441 } 00:33:50.441 ] 00:33:50.441 }' 00:33:50.441 18:32:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:33:50.441 18:32:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:51.010 18:32:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:33:51.010 18:32:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:51.010 18:32:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:51.010 [2024-12-06 18:32:21.772062] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:33:51.010 [2024-12-06 18:32:21.772103] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000007780 name Existed_Raid, state configuring 00:33:51.010 18:32:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:51.010 18:32:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:33:51.010 18:32:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:51.010 18:32:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:51.010 [2024-12-06 18:32:21.784063] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:33:51.010 [2024-12-06 18:32:21.784231] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:33:51.010 [2024-12-06 18:32:21.784255] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:33:51.010 [2024-12-06 18:32:21.784270] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:33:51.010 [2024-12-06 18:32:21.784278] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:33:51.010 [2024-12-06 18:32:21.784291] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:33:51.010 18:32:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:51.011 18:32:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:33:51.011 18:32:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:51.011 18:32:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:51.011 [2024-12-06 18:32:21.837688] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:33:51.011 BaseBdev1 00:33:51.011 18:32:21 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:51.011 18:32:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:33:51.011 18:32:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:33:51.011 18:32:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:33:51.011 18:32:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:33:51.011 18:32:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:33:51.011 18:32:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:33:51.011 18:32:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:33:51.011 18:32:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:51.011 18:32:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:51.011 18:32:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:51.011 18:32:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:33:51.011 18:32:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:51.011 18:32:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:51.011 [ 00:33:51.011 { 00:33:51.011 "name": "BaseBdev1", 00:33:51.011 "aliases": [ 00:33:51.011 "51b54eda-de20-417e-a558-a0b2f702054e" 00:33:51.011 ], 00:33:51.011 "product_name": "Malloc disk", 00:33:51.011 "block_size": 512, 00:33:51.011 "num_blocks": 65536, 00:33:51.011 "uuid": "51b54eda-de20-417e-a558-a0b2f702054e", 00:33:51.011 "assigned_rate_limits": { 00:33:51.011 "rw_ios_per_sec": 0, 00:33:51.011 
"rw_mbytes_per_sec": 0, 00:33:51.011 "r_mbytes_per_sec": 0, 00:33:51.011 "w_mbytes_per_sec": 0 00:33:51.011 }, 00:33:51.011 "claimed": true, 00:33:51.011 "claim_type": "exclusive_write", 00:33:51.011 "zoned": false, 00:33:51.011 "supported_io_types": { 00:33:51.011 "read": true, 00:33:51.011 "write": true, 00:33:51.011 "unmap": true, 00:33:51.011 "flush": true, 00:33:51.011 "reset": true, 00:33:51.011 "nvme_admin": false, 00:33:51.011 "nvme_io": false, 00:33:51.011 "nvme_io_md": false, 00:33:51.011 "write_zeroes": true, 00:33:51.011 "zcopy": true, 00:33:51.011 "get_zone_info": false, 00:33:51.011 "zone_management": false, 00:33:51.011 "zone_append": false, 00:33:51.011 "compare": false, 00:33:51.011 "compare_and_write": false, 00:33:51.011 "abort": true, 00:33:51.011 "seek_hole": false, 00:33:51.011 "seek_data": false, 00:33:51.011 "copy": true, 00:33:51.011 "nvme_iov_md": false 00:33:51.011 }, 00:33:51.011 "memory_domains": [ 00:33:51.011 { 00:33:51.011 "dma_device_id": "system", 00:33:51.011 "dma_device_type": 1 00:33:51.011 }, 00:33:51.011 { 00:33:51.011 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:51.011 "dma_device_type": 2 00:33:51.011 } 00:33:51.011 ], 00:33:51.011 "driver_specific": {} 00:33:51.011 } 00:33:51.011 ] 00:33:51.011 18:32:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:51.011 18:32:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:33:51.011 18:32:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:33:51.011 18:32:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:33:51.011 18:32:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:33:51.011 18:32:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:33:51.011 18:32:21 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:33:51.011 18:32:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:33:51.011 18:32:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:33:51.011 18:32:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:33:51.011 18:32:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:33:51.011 18:32:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:33:51.011 18:32:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:51.011 18:32:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:51.011 18:32:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:33:51.011 18:32:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:51.011 18:32:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:51.011 18:32:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:33:51.011 "name": "Existed_Raid", 00:33:51.011 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:51.011 "strip_size_kb": 64, 00:33:51.011 "state": "configuring", 00:33:51.011 "raid_level": "raid5f", 00:33:51.011 "superblock": false, 00:33:51.011 "num_base_bdevs": 3, 00:33:51.011 "num_base_bdevs_discovered": 1, 00:33:51.011 "num_base_bdevs_operational": 3, 00:33:51.011 "base_bdevs_list": [ 00:33:51.011 { 00:33:51.011 "name": "BaseBdev1", 00:33:51.011 "uuid": "51b54eda-de20-417e-a558-a0b2f702054e", 00:33:51.011 "is_configured": true, 00:33:51.011 "data_offset": 0, 00:33:51.011 "data_size": 65536 00:33:51.011 }, 00:33:51.011 { 00:33:51.011 "name": 
"BaseBdev2", 00:33:51.011 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:51.011 "is_configured": false, 00:33:51.011 "data_offset": 0, 00:33:51.011 "data_size": 0 00:33:51.011 }, 00:33:51.011 { 00:33:51.011 "name": "BaseBdev3", 00:33:51.011 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:51.011 "is_configured": false, 00:33:51.011 "data_offset": 0, 00:33:51.011 "data_size": 0 00:33:51.011 } 00:33:51.011 ] 00:33:51.011 }' 00:33:51.011 18:32:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:33:51.011 18:32:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:51.582 18:32:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:33:51.582 18:32:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:51.582 18:32:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:51.582 [2024-12-06 18:32:22.301123] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:33:51.582 [2024-12-06 18:32:22.301189] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:33:51.582 18:32:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:51.582 18:32:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:33:51.582 18:32:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:51.582 18:32:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:51.582 [2024-12-06 18:32:22.309183] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:33:51.582 [2024-12-06 18:32:22.311553] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to 
find bdev with name: BaseBdev2 00:33:51.582 [2024-12-06 18:32:22.311603] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:33:51.582 [2024-12-06 18:32:22.311615] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:33:51.582 [2024-12-06 18:32:22.311628] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:33:51.582 18:32:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:51.582 18:32:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:33:51.582 18:32:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:33:51.582 18:32:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:33:51.582 18:32:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:33:51.582 18:32:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:33:51.582 18:32:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:33:51.582 18:32:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:33:51.582 18:32:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:33:51.582 18:32:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:33:51.582 18:32:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:33:51.582 18:32:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:33:51.582 18:32:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:33:51.582 18:32:22 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:51.582 18:32:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:33:51.582 18:32:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:51.582 18:32:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:51.582 18:32:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:51.582 18:32:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:33:51.582 "name": "Existed_Raid", 00:33:51.582 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:51.582 "strip_size_kb": 64, 00:33:51.582 "state": "configuring", 00:33:51.582 "raid_level": "raid5f", 00:33:51.582 "superblock": false, 00:33:51.582 "num_base_bdevs": 3, 00:33:51.582 "num_base_bdevs_discovered": 1, 00:33:51.582 "num_base_bdevs_operational": 3, 00:33:51.582 "base_bdevs_list": [ 00:33:51.582 { 00:33:51.582 "name": "BaseBdev1", 00:33:51.582 "uuid": "51b54eda-de20-417e-a558-a0b2f702054e", 00:33:51.582 "is_configured": true, 00:33:51.582 "data_offset": 0, 00:33:51.582 "data_size": 65536 00:33:51.582 }, 00:33:51.582 { 00:33:51.582 "name": "BaseBdev2", 00:33:51.582 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:51.582 "is_configured": false, 00:33:51.582 "data_offset": 0, 00:33:51.582 "data_size": 0 00:33:51.582 }, 00:33:51.582 { 00:33:51.582 "name": "BaseBdev3", 00:33:51.582 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:51.582 "is_configured": false, 00:33:51.582 "data_offset": 0, 00:33:51.582 "data_size": 0 00:33:51.582 } 00:33:51.582 ] 00:33:51.582 }' 00:33:51.582 18:32:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:33:51.582 18:32:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:51.842 18:32:22 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:33:51.842 18:32:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:51.842 18:32:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:51.842 [2024-12-06 18:32:22.777746] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:33:51.842 BaseBdev2 00:33:51.842 18:32:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:51.842 18:32:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:33:51.842 18:32:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:33:51.842 18:32:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:33:51.842 18:32:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:33:51.842 18:32:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:33:51.842 18:32:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:33:51.842 18:32:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:33:51.842 18:32:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:51.842 18:32:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:51.842 18:32:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:52.102 18:32:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:33:52.102 18:32:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:52.102 18:32:22 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:33:52.102 [ 00:33:52.102 { 00:33:52.102 "name": "BaseBdev2", 00:33:52.102 "aliases": [ 00:33:52.102 "5ef97a49-70a2-4a3c-b470-00c550838da8" 00:33:52.102 ], 00:33:52.102 "product_name": "Malloc disk", 00:33:52.102 "block_size": 512, 00:33:52.102 "num_blocks": 65536, 00:33:52.102 "uuid": "5ef97a49-70a2-4a3c-b470-00c550838da8", 00:33:52.102 "assigned_rate_limits": { 00:33:52.102 "rw_ios_per_sec": 0, 00:33:52.102 "rw_mbytes_per_sec": 0, 00:33:52.102 "r_mbytes_per_sec": 0, 00:33:52.102 "w_mbytes_per_sec": 0 00:33:52.102 }, 00:33:52.102 "claimed": true, 00:33:52.102 "claim_type": "exclusive_write", 00:33:52.102 "zoned": false, 00:33:52.102 "supported_io_types": { 00:33:52.102 "read": true, 00:33:52.102 "write": true, 00:33:52.102 "unmap": true, 00:33:52.102 "flush": true, 00:33:52.102 "reset": true, 00:33:52.102 "nvme_admin": false, 00:33:52.102 "nvme_io": false, 00:33:52.102 "nvme_io_md": false, 00:33:52.102 "write_zeroes": true, 00:33:52.102 "zcopy": true, 00:33:52.102 "get_zone_info": false, 00:33:52.102 "zone_management": false, 00:33:52.102 "zone_append": false, 00:33:52.102 "compare": false, 00:33:52.102 "compare_and_write": false, 00:33:52.102 "abort": true, 00:33:52.102 "seek_hole": false, 00:33:52.102 "seek_data": false, 00:33:52.102 "copy": true, 00:33:52.102 "nvme_iov_md": false 00:33:52.102 }, 00:33:52.102 "memory_domains": [ 00:33:52.102 { 00:33:52.102 "dma_device_id": "system", 00:33:52.102 "dma_device_type": 1 00:33:52.102 }, 00:33:52.102 { 00:33:52.102 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:52.102 "dma_device_type": 2 00:33:52.102 } 00:33:52.102 ], 00:33:52.102 "driver_specific": {} 00:33:52.102 } 00:33:52.102 ] 00:33:52.102 18:32:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:52.102 18:32:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:33:52.102 18:32:22 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@250 -- # (( i++ )) 00:33:52.102 18:32:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:33:52.102 18:32:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:33:52.102 18:32:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:33:52.102 18:32:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:33:52.102 18:32:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:33:52.102 18:32:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:33:52.102 18:32:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:33:52.102 18:32:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:33:52.102 18:32:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:33:52.102 18:32:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:33:52.102 18:32:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:33:52.102 18:32:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:52.102 18:32:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:52.102 18:32:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:52.102 18:32:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:33:52.102 18:32:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:52.102 18:32:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- 
# raid_bdev_info='{ 00:33:52.102 "name": "Existed_Raid", 00:33:52.102 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:52.102 "strip_size_kb": 64, 00:33:52.102 "state": "configuring", 00:33:52.102 "raid_level": "raid5f", 00:33:52.102 "superblock": false, 00:33:52.102 "num_base_bdevs": 3, 00:33:52.102 "num_base_bdevs_discovered": 2, 00:33:52.102 "num_base_bdevs_operational": 3, 00:33:52.102 "base_bdevs_list": [ 00:33:52.102 { 00:33:52.102 "name": "BaseBdev1", 00:33:52.102 "uuid": "51b54eda-de20-417e-a558-a0b2f702054e", 00:33:52.102 "is_configured": true, 00:33:52.102 "data_offset": 0, 00:33:52.102 "data_size": 65536 00:33:52.102 }, 00:33:52.102 { 00:33:52.102 "name": "BaseBdev2", 00:33:52.102 "uuid": "5ef97a49-70a2-4a3c-b470-00c550838da8", 00:33:52.102 "is_configured": true, 00:33:52.102 "data_offset": 0, 00:33:52.102 "data_size": 65536 00:33:52.102 }, 00:33:52.102 { 00:33:52.102 "name": "BaseBdev3", 00:33:52.102 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:52.102 "is_configured": false, 00:33:52.102 "data_offset": 0, 00:33:52.102 "data_size": 0 00:33:52.102 } 00:33:52.102 ] 00:33:52.102 }' 00:33:52.102 18:32:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:33:52.102 18:32:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:52.362 18:32:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:33:52.362 18:32:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:52.362 18:32:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:52.362 [2024-12-06 18:32:23.279486] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:33:52.362 [2024-12-06 18:32:23.279563] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:33:52.362 [2024-12-06 18:32:23.279584] 
bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:33:52.362 [2024-12-06 18:32:23.279898] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:33:52.362 [2024-12-06 18:32:23.286198] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:33:52.362 [2024-12-06 18:32:23.286326] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:33:52.362 [2024-12-06 18:32:23.286698] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:33:52.362 BaseBdev3 00:33:52.362 18:32:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:52.362 18:32:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:33:52.362 18:32:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:33:52.362 18:32:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:33:52.362 18:32:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:33:52.362 18:32:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:33:52.362 18:32:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:33:52.362 18:32:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:33:52.362 18:32:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:52.362 18:32:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:52.362 18:32:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:52.362 18:32:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b 
BaseBdev3 -t 2000 00:33:52.362 18:32:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:52.362 18:32:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:52.362 [ 00:33:52.622 { 00:33:52.622 "name": "BaseBdev3", 00:33:52.622 "aliases": [ 00:33:52.622 "975d9fe1-2f8f-435c-a6e9-d0dd104adbe6" 00:33:52.622 ], 00:33:52.622 "product_name": "Malloc disk", 00:33:52.622 "block_size": 512, 00:33:52.622 "num_blocks": 65536, 00:33:52.622 "uuid": "975d9fe1-2f8f-435c-a6e9-d0dd104adbe6", 00:33:52.622 "assigned_rate_limits": { 00:33:52.622 "rw_ios_per_sec": 0, 00:33:52.622 "rw_mbytes_per_sec": 0, 00:33:52.622 "r_mbytes_per_sec": 0, 00:33:52.622 "w_mbytes_per_sec": 0 00:33:52.622 }, 00:33:52.622 "claimed": true, 00:33:52.622 "claim_type": "exclusive_write", 00:33:52.622 "zoned": false, 00:33:52.622 "supported_io_types": { 00:33:52.622 "read": true, 00:33:52.622 "write": true, 00:33:52.622 "unmap": true, 00:33:52.622 "flush": true, 00:33:52.622 "reset": true, 00:33:52.622 "nvme_admin": false, 00:33:52.622 "nvme_io": false, 00:33:52.622 "nvme_io_md": false, 00:33:52.622 "write_zeroes": true, 00:33:52.622 "zcopy": true, 00:33:52.622 "get_zone_info": false, 00:33:52.622 "zone_management": false, 00:33:52.622 "zone_append": false, 00:33:52.622 "compare": false, 00:33:52.622 "compare_and_write": false, 00:33:52.622 "abort": true, 00:33:52.622 "seek_hole": false, 00:33:52.622 "seek_data": false, 00:33:52.622 "copy": true, 00:33:52.622 "nvme_iov_md": false 00:33:52.622 }, 00:33:52.622 "memory_domains": [ 00:33:52.622 { 00:33:52.622 "dma_device_id": "system", 00:33:52.622 "dma_device_type": 1 00:33:52.622 }, 00:33:52.622 { 00:33:52.622 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:52.622 "dma_device_type": 2 00:33:52.622 } 00:33:52.622 ], 00:33:52.622 "driver_specific": {} 00:33:52.622 } 00:33:52.622 ] 00:33:52.622 18:32:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:33:52.622 18:32:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:33:52.622 18:32:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:33:52.622 18:32:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:33:52.622 18:32:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:33:52.622 18:32:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:33:52.622 18:32:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:33:52.622 18:32:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:33:52.622 18:32:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:33:52.622 18:32:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:33:52.622 18:32:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:33:52.622 18:32:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:33:52.622 18:32:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:33:52.622 18:32:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:33:52.622 18:32:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:52.622 18:32:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:33:52.622 18:32:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:52.622 18:32:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:52.622 18:32:23 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:52.622 18:32:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:33:52.622 "name": "Existed_Raid", 00:33:52.622 "uuid": "95324d7f-45b5-46a3-bf03-058ebc0b2595", 00:33:52.622 "strip_size_kb": 64, 00:33:52.622 "state": "online", 00:33:52.622 "raid_level": "raid5f", 00:33:52.623 "superblock": false, 00:33:52.623 "num_base_bdevs": 3, 00:33:52.623 "num_base_bdevs_discovered": 3, 00:33:52.623 "num_base_bdevs_operational": 3, 00:33:52.623 "base_bdevs_list": [ 00:33:52.623 { 00:33:52.623 "name": "BaseBdev1", 00:33:52.623 "uuid": "51b54eda-de20-417e-a558-a0b2f702054e", 00:33:52.623 "is_configured": true, 00:33:52.623 "data_offset": 0, 00:33:52.623 "data_size": 65536 00:33:52.623 }, 00:33:52.623 { 00:33:52.623 "name": "BaseBdev2", 00:33:52.623 "uuid": "5ef97a49-70a2-4a3c-b470-00c550838da8", 00:33:52.623 "is_configured": true, 00:33:52.623 "data_offset": 0, 00:33:52.623 "data_size": 65536 00:33:52.623 }, 00:33:52.623 { 00:33:52.623 "name": "BaseBdev3", 00:33:52.623 "uuid": "975d9fe1-2f8f-435c-a6e9-d0dd104adbe6", 00:33:52.623 "is_configured": true, 00:33:52.623 "data_offset": 0, 00:33:52.623 "data_size": 65536 00:33:52.623 } 00:33:52.623 ] 00:33:52.623 }' 00:33:52.623 18:32:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:33:52.623 18:32:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:52.883 18:32:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:33:52.883 18:32:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:33:52.883 18:32:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:33:52.883 18:32:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:33:52.883 18:32:23 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:33:52.883 18:32:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:33:52.883 18:32:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:33:52.883 18:32:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:33:52.883 18:32:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:52.883 18:32:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:52.883 [2024-12-06 18:32:23.757503] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:33:52.883 18:32:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:52.883 18:32:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:33:52.883 "name": "Existed_Raid", 00:33:52.883 "aliases": [ 00:33:52.883 "95324d7f-45b5-46a3-bf03-058ebc0b2595" 00:33:52.883 ], 00:33:52.883 "product_name": "Raid Volume", 00:33:52.883 "block_size": 512, 00:33:52.883 "num_blocks": 131072, 00:33:52.883 "uuid": "95324d7f-45b5-46a3-bf03-058ebc0b2595", 00:33:52.883 "assigned_rate_limits": { 00:33:52.883 "rw_ios_per_sec": 0, 00:33:52.883 "rw_mbytes_per_sec": 0, 00:33:52.883 "r_mbytes_per_sec": 0, 00:33:52.883 "w_mbytes_per_sec": 0 00:33:52.883 }, 00:33:52.883 "claimed": false, 00:33:52.883 "zoned": false, 00:33:52.883 "supported_io_types": { 00:33:52.883 "read": true, 00:33:52.883 "write": true, 00:33:52.883 "unmap": false, 00:33:52.883 "flush": false, 00:33:52.883 "reset": true, 00:33:52.883 "nvme_admin": false, 00:33:52.883 "nvme_io": false, 00:33:52.883 "nvme_io_md": false, 00:33:52.883 "write_zeroes": true, 00:33:52.883 "zcopy": false, 00:33:52.883 "get_zone_info": false, 00:33:52.883 "zone_management": false, 00:33:52.883 "zone_append": false, 
00:33:52.883 "compare": false, 00:33:52.883 "compare_and_write": false, 00:33:52.883 "abort": false, 00:33:52.883 "seek_hole": false, 00:33:52.883 "seek_data": false, 00:33:52.883 "copy": false, 00:33:52.883 "nvme_iov_md": false 00:33:52.883 }, 00:33:52.883 "driver_specific": { 00:33:52.883 "raid": { 00:33:52.883 "uuid": "95324d7f-45b5-46a3-bf03-058ebc0b2595", 00:33:52.883 "strip_size_kb": 64, 00:33:52.883 "state": "online", 00:33:52.883 "raid_level": "raid5f", 00:33:52.883 "superblock": false, 00:33:52.883 "num_base_bdevs": 3, 00:33:52.883 "num_base_bdevs_discovered": 3, 00:33:52.883 "num_base_bdevs_operational": 3, 00:33:52.883 "base_bdevs_list": [ 00:33:52.883 { 00:33:52.884 "name": "BaseBdev1", 00:33:52.884 "uuid": "51b54eda-de20-417e-a558-a0b2f702054e", 00:33:52.884 "is_configured": true, 00:33:52.884 "data_offset": 0, 00:33:52.884 "data_size": 65536 00:33:52.884 }, 00:33:52.884 { 00:33:52.884 "name": "BaseBdev2", 00:33:52.884 "uuid": "5ef97a49-70a2-4a3c-b470-00c550838da8", 00:33:52.884 "is_configured": true, 00:33:52.884 "data_offset": 0, 00:33:52.884 "data_size": 65536 00:33:52.884 }, 00:33:52.884 { 00:33:52.884 "name": "BaseBdev3", 00:33:52.884 "uuid": "975d9fe1-2f8f-435c-a6e9-d0dd104adbe6", 00:33:52.884 "is_configured": true, 00:33:52.884 "data_offset": 0, 00:33:52.884 "data_size": 65536 00:33:52.884 } 00:33:52.884 ] 00:33:52.884 } 00:33:52.884 } 00:33:52.884 }' 00:33:52.884 18:32:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:33:53.144 18:32:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:33:53.144 BaseBdev2 00:33:53.144 BaseBdev3' 00:33:53.144 18:32:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:33:53.144 18:32:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 
' 00:33:53.144 18:32:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:33:53.144 18:32:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:33:53.144 18:32:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:33:53.144 18:32:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:53.144 18:32:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:53.144 18:32:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:53.144 18:32:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:33:53.144 18:32:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:33:53.144 18:32:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:33:53.144 18:32:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:33:53.144 18:32:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:53.144 18:32:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:53.144 18:32:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:33:53.144 18:32:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:53.144 18:32:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:33:53.144 18:32:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:33:53.144 18:32:23 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:33:53.144 18:32:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:33:53.144 18:32:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:53.144 18:32:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:33:53.144 18:32:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:53.144 18:32:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:53.144 18:32:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:33:53.144 18:32:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:33:53.144 18:32:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:33:53.144 18:32:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:53.144 18:32:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:53.144 [2024-12-06 18:32:23.989033] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:33:53.404 18:32:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:53.404 18:32:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:33:53.404 18:32:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:33:53.404 18:32:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:33:53.404 18:32:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:33:53.404 18:32:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:33:53.404 
18:32:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 2 00:33:53.404 18:32:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:33:53.404 18:32:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:33:53.404 18:32:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:33:53.404 18:32:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:33:53.404 18:32:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:33:53.404 18:32:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:33:53.404 18:32:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:33:53.404 18:32:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:33:53.404 18:32:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:33:53.404 18:32:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:33:53.404 18:32:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:53.404 18:32:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:53.404 18:32:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:53.404 18:32:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:53.404 18:32:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:33:53.404 "name": "Existed_Raid", 00:33:53.404 "uuid": "95324d7f-45b5-46a3-bf03-058ebc0b2595", 00:33:53.404 "strip_size_kb": 64, 00:33:53.404 "state": 
"online", 00:33:53.404 "raid_level": "raid5f", 00:33:53.404 "superblock": false, 00:33:53.404 "num_base_bdevs": 3, 00:33:53.404 "num_base_bdevs_discovered": 2, 00:33:53.404 "num_base_bdevs_operational": 2, 00:33:53.404 "base_bdevs_list": [ 00:33:53.404 { 00:33:53.404 "name": null, 00:33:53.404 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:53.404 "is_configured": false, 00:33:53.404 "data_offset": 0, 00:33:53.404 "data_size": 65536 00:33:53.404 }, 00:33:53.404 { 00:33:53.404 "name": "BaseBdev2", 00:33:53.404 "uuid": "5ef97a49-70a2-4a3c-b470-00c550838da8", 00:33:53.404 "is_configured": true, 00:33:53.404 "data_offset": 0, 00:33:53.404 "data_size": 65536 00:33:53.404 }, 00:33:53.404 { 00:33:53.404 "name": "BaseBdev3", 00:33:53.404 "uuid": "975d9fe1-2f8f-435c-a6e9-d0dd104adbe6", 00:33:53.404 "is_configured": true, 00:33:53.404 "data_offset": 0, 00:33:53.404 "data_size": 65536 00:33:53.404 } 00:33:53.404 ] 00:33:53.404 }' 00:33:53.404 18:32:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:33:53.404 18:32:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:53.664 18:32:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:33:53.664 18:32:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:33:53.664 18:32:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:53.664 18:32:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:33:53.664 18:32:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:53.664 18:32:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:53.664 18:32:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:53.664 18:32:24 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:33:53.664 18:32:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:33:53.664 18:32:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:33:53.664 18:32:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:53.664 18:32:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:53.664 [2024-12-06 18:32:24.554656] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:33:53.664 [2024-12-06 18:32:24.554778] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:33:53.923 [2024-12-06 18:32:24.657540] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:33:53.923 18:32:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:53.923 18:32:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:33:53.923 18:32:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:33:53.923 18:32:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:53.923 18:32:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:53.923 18:32:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:53.923 18:32:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:33:53.923 18:32:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:53.923 18:32:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:33:53.923 18:32:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 
00:33:53.923 18:32:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:33:53.923 18:32:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:53.923 18:32:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:53.923 [2024-12-06 18:32:24.713480] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:33:53.923 [2024-12-06 18:32:24.713536] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:33:53.923 18:32:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:53.923 18:32:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:33:53.923 18:32:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:33:53.923 18:32:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:33:53.923 18:32:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:53.923 18:32:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:53.923 18:32:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:53.923 18:32:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:53.923 18:32:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:33:53.923 18:32:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:33:53.923 18:32:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:33:53.923 18:32:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:33:53.923 18:32:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < 
num_base_bdevs )) 00:33:53.923 18:32:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:33:53.923 18:32:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:53.923 18:32:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:54.183 BaseBdev2 00:33:54.183 18:32:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:54.183 18:32:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:33:54.183 18:32:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:33:54.183 18:32:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:33:54.183 18:32:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:33:54.183 18:32:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:33:54.183 18:32:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:33:54.183 18:32:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:33:54.183 18:32:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:54.183 18:32:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:54.183 18:32:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:54.183 18:32:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:33:54.183 18:32:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:54.183 18:32:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:33:54.183 [ 00:33:54.183 { 00:33:54.183 "name": "BaseBdev2", 00:33:54.183 "aliases": [ 00:33:54.183 "a72b2b31-7b98-4a97-8c00-edcc1fbf95c7" 00:33:54.183 ], 00:33:54.183 "product_name": "Malloc disk", 00:33:54.183 "block_size": 512, 00:33:54.183 "num_blocks": 65536, 00:33:54.183 "uuid": "a72b2b31-7b98-4a97-8c00-edcc1fbf95c7", 00:33:54.183 "assigned_rate_limits": { 00:33:54.183 "rw_ios_per_sec": 0, 00:33:54.183 "rw_mbytes_per_sec": 0, 00:33:54.183 "r_mbytes_per_sec": 0, 00:33:54.183 "w_mbytes_per_sec": 0 00:33:54.183 }, 00:33:54.183 "claimed": false, 00:33:54.183 "zoned": false, 00:33:54.183 "supported_io_types": { 00:33:54.183 "read": true, 00:33:54.183 "write": true, 00:33:54.183 "unmap": true, 00:33:54.183 "flush": true, 00:33:54.183 "reset": true, 00:33:54.183 "nvme_admin": false, 00:33:54.183 "nvme_io": false, 00:33:54.183 "nvme_io_md": false, 00:33:54.183 "write_zeroes": true, 00:33:54.183 "zcopy": true, 00:33:54.183 "get_zone_info": false, 00:33:54.183 "zone_management": false, 00:33:54.183 "zone_append": false, 00:33:54.183 "compare": false, 00:33:54.183 "compare_and_write": false, 00:33:54.183 "abort": true, 00:33:54.183 "seek_hole": false, 00:33:54.183 "seek_data": false, 00:33:54.183 "copy": true, 00:33:54.183 "nvme_iov_md": false 00:33:54.183 }, 00:33:54.183 "memory_domains": [ 00:33:54.183 { 00:33:54.183 "dma_device_id": "system", 00:33:54.183 "dma_device_type": 1 00:33:54.183 }, 00:33:54.183 { 00:33:54.183 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:54.183 "dma_device_type": 2 00:33:54.183 } 00:33:54.183 ], 00:33:54.183 "driver_specific": {} 00:33:54.183 } 00:33:54.183 ] 00:33:54.183 18:32:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:54.183 18:32:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:33:54.183 18:32:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:33:54.183 18:32:24 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:33:54.183 18:32:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:33:54.183 18:32:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:54.183 18:32:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:54.183 BaseBdev3 00:33:54.183 18:32:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:54.183 18:32:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:33:54.183 18:32:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:33:54.183 18:32:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:33:54.183 18:32:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:33:54.183 18:32:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:33:54.183 18:32:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:33:54.183 18:32:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:33:54.183 18:32:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:54.183 18:32:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:54.183 18:32:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:54.183 18:32:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:33:54.183 18:32:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:54.183 18:32:25 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:33:54.183 [ 00:33:54.183 { 00:33:54.183 "name": "BaseBdev3", 00:33:54.183 "aliases": [ 00:33:54.183 "ede3a657-a009-464f-bcd5-4ccfcf12393a" 00:33:54.183 ], 00:33:54.183 "product_name": "Malloc disk", 00:33:54.183 "block_size": 512, 00:33:54.183 "num_blocks": 65536, 00:33:54.183 "uuid": "ede3a657-a009-464f-bcd5-4ccfcf12393a", 00:33:54.183 "assigned_rate_limits": { 00:33:54.183 "rw_ios_per_sec": 0, 00:33:54.183 "rw_mbytes_per_sec": 0, 00:33:54.183 "r_mbytes_per_sec": 0, 00:33:54.183 "w_mbytes_per_sec": 0 00:33:54.183 }, 00:33:54.183 "claimed": false, 00:33:54.183 "zoned": false, 00:33:54.183 "supported_io_types": { 00:33:54.183 "read": true, 00:33:54.183 "write": true, 00:33:54.183 "unmap": true, 00:33:54.183 "flush": true, 00:33:54.183 "reset": true, 00:33:54.183 "nvme_admin": false, 00:33:54.183 "nvme_io": false, 00:33:54.183 "nvme_io_md": false, 00:33:54.183 "write_zeroes": true, 00:33:54.183 "zcopy": true, 00:33:54.184 "get_zone_info": false, 00:33:54.184 "zone_management": false, 00:33:54.184 "zone_append": false, 00:33:54.184 "compare": false, 00:33:54.184 "compare_and_write": false, 00:33:54.184 "abort": true, 00:33:54.184 "seek_hole": false, 00:33:54.184 "seek_data": false, 00:33:54.184 "copy": true, 00:33:54.184 "nvme_iov_md": false 00:33:54.184 }, 00:33:54.184 "memory_domains": [ 00:33:54.184 { 00:33:54.184 "dma_device_id": "system", 00:33:54.184 "dma_device_type": 1 00:33:54.184 }, 00:33:54.184 { 00:33:54.184 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:54.184 "dma_device_type": 2 00:33:54.184 } 00:33:54.184 ], 00:33:54.184 "driver_specific": {} 00:33:54.184 } 00:33:54.184 ] 00:33:54.184 18:32:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:54.184 18:32:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:33:54.184 18:32:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:33:54.184 18:32:25 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:33:54.184 18:32:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:33:54.184 18:32:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:54.184 18:32:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:54.184 [2024-12-06 18:32:25.048437] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:33:54.184 [2024-12-06 18:32:25.048491] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:33:54.184 [2024-12-06 18:32:25.048532] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:33:54.184 [2024-12-06 18:32:25.050862] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:33:54.184 18:32:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:54.184 18:32:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:33:54.184 18:32:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:33:54.184 18:32:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:33:54.184 18:32:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:33:54.184 18:32:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:33:54.184 18:32:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:33:54.184 18:32:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:33:54.184 18:32:25 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:33:54.184 18:32:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:33:54.184 18:32:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:33:54.184 18:32:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:54.184 18:32:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:54.184 18:32:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:54.184 18:32:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:33:54.184 18:32:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:54.184 18:32:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:33:54.184 "name": "Existed_Raid", 00:33:54.184 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:54.184 "strip_size_kb": 64, 00:33:54.184 "state": "configuring", 00:33:54.184 "raid_level": "raid5f", 00:33:54.184 "superblock": false, 00:33:54.184 "num_base_bdevs": 3, 00:33:54.184 "num_base_bdevs_discovered": 2, 00:33:54.184 "num_base_bdevs_operational": 3, 00:33:54.184 "base_bdevs_list": [ 00:33:54.184 { 00:33:54.184 "name": "BaseBdev1", 00:33:54.184 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:54.184 "is_configured": false, 00:33:54.184 "data_offset": 0, 00:33:54.184 "data_size": 0 00:33:54.184 }, 00:33:54.184 { 00:33:54.184 "name": "BaseBdev2", 00:33:54.184 "uuid": "a72b2b31-7b98-4a97-8c00-edcc1fbf95c7", 00:33:54.184 "is_configured": true, 00:33:54.184 "data_offset": 0, 00:33:54.184 "data_size": 65536 00:33:54.184 }, 00:33:54.184 { 00:33:54.184 "name": "BaseBdev3", 00:33:54.184 "uuid": "ede3a657-a009-464f-bcd5-4ccfcf12393a", 00:33:54.184 "is_configured": true, 
00:33:54.184 "data_offset": 0, 00:33:54.184 "data_size": 65536 00:33:54.184 } 00:33:54.184 ] 00:33:54.184 }' 00:33:54.184 18:32:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:33:54.184 18:32:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:54.752 18:32:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:33:54.752 18:32:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:54.752 18:32:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:54.752 [2024-12-06 18:32:25.475867] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:33:54.752 18:32:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:54.752 18:32:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:33:54.752 18:32:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:33:54.752 18:32:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:33:54.752 18:32:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:33:54.752 18:32:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:33:54.752 18:32:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:33:54.752 18:32:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:33:54.752 18:32:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:33:54.752 18:32:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:33:54.752 18:32:25 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:33:54.752 18:32:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:33:54.752 18:32:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:54.752 18:32:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:54.752 18:32:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:54.752 18:32:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:54.752 18:32:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:33:54.752 "name": "Existed_Raid", 00:33:54.752 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:54.752 "strip_size_kb": 64, 00:33:54.752 "state": "configuring", 00:33:54.752 "raid_level": "raid5f", 00:33:54.752 "superblock": false, 00:33:54.752 "num_base_bdevs": 3, 00:33:54.752 "num_base_bdevs_discovered": 1, 00:33:54.752 "num_base_bdevs_operational": 3, 00:33:54.752 "base_bdevs_list": [ 00:33:54.752 { 00:33:54.752 "name": "BaseBdev1", 00:33:54.752 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:54.752 "is_configured": false, 00:33:54.752 "data_offset": 0, 00:33:54.752 "data_size": 0 00:33:54.752 }, 00:33:54.752 { 00:33:54.752 "name": null, 00:33:54.752 "uuid": "a72b2b31-7b98-4a97-8c00-edcc1fbf95c7", 00:33:54.752 "is_configured": false, 00:33:54.752 "data_offset": 0, 00:33:54.752 "data_size": 65536 00:33:54.752 }, 00:33:54.752 { 00:33:54.752 "name": "BaseBdev3", 00:33:54.752 "uuid": "ede3a657-a009-464f-bcd5-4ccfcf12393a", 00:33:54.752 "is_configured": true, 00:33:54.752 "data_offset": 0, 00:33:54.752 "data_size": 65536 00:33:54.752 } 00:33:54.752 ] 00:33:54.752 }' 00:33:54.752 18:32:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:33:54.752 18:32:25 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:55.011 18:32:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:55.011 18:32:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:33:55.011 18:32:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:55.011 18:32:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:55.011 18:32:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:55.011 18:32:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:33:55.011 18:32:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:33:55.011 18:32:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:55.011 18:32:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:55.271 [2024-12-06 18:32:25.971472] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:33:55.271 BaseBdev1 00:33:55.271 18:32:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:55.271 18:32:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:33:55.271 18:32:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:33:55.271 18:32:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:33:55.271 18:32:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:33:55.271 18:32:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:33:55.271 18:32:25 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:33:55.271 18:32:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:33:55.271 18:32:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:55.271 18:32:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:55.271 18:32:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:55.271 18:32:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:33:55.271 18:32:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:55.271 18:32:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:55.271 [ 00:33:55.271 { 00:33:55.271 "name": "BaseBdev1", 00:33:55.271 "aliases": [ 00:33:55.271 "1b9276e8-2fd7-4804-bb91-fbcbaa88f8fa" 00:33:55.271 ], 00:33:55.271 "product_name": "Malloc disk", 00:33:55.271 "block_size": 512, 00:33:55.271 "num_blocks": 65536, 00:33:55.271 "uuid": "1b9276e8-2fd7-4804-bb91-fbcbaa88f8fa", 00:33:55.271 "assigned_rate_limits": { 00:33:55.271 "rw_ios_per_sec": 0, 00:33:55.271 "rw_mbytes_per_sec": 0, 00:33:55.271 "r_mbytes_per_sec": 0, 00:33:55.271 "w_mbytes_per_sec": 0 00:33:55.271 }, 00:33:55.271 "claimed": true, 00:33:55.271 "claim_type": "exclusive_write", 00:33:55.271 "zoned": false, 00:33:55.271 "supported_io_types": { 00:33:55.271 "read": true, 00:33:55.271 "write": true, 00:33:55.271 "unmap": true, 00:33:55.271 "flush": true, 00:33:55.271 "reset": true, 00:33:55.271 "nvme_admin": false, 00:33:55.271 "nvme_io": false, 00:33:55.271 "nvme_io_md": false, 00:33:55.271 "write_zeroes": true, 00:33:55.271 "zcopy": true, 00:33:55.271 "get_zone_info": false, 00:33:55.271 "zone_management": false, 00:33:55.271 "zone_append": false, 00:33:55.271 
"compare": false, 00:33:55.271 "compare_and_write": false, 00:33:55.271 "abort": true, 00:33:55.271 "seek_hole": false, 00:33:55.271 "seek_data": false, 00:33:55.271 "copy": true, 00:33:55.271 "nvme_iov_md": false 00:33:55.271 }, 00:33:55.271 "memory_domains": [ 00:33:55.271 { 00:33:55.271 "dma_device_id": "system", 00:33:55.271 "dma_device_type": 1 00:33:55.271 }, 00:33:55.271 { 00:33:55.271 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:55.271 "dma_device_type": 2 00:33:55.271 } 00:33:55.271 ], 00:33:55.271 "driver_specific": {} 00:33:55.271 } 00:33:55.271 ] 00:33:55.271 18:32:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:55.271 18:32:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:33:55.271 18:32:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:33:55.271 18:32:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:33:55.271 18:32:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:33:55.271 18:32:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:33:55.271 18:32:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:33:55.271 18:32:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:33:55.271 18:32:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:33:55.271 18:32:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:33:55.271 18:32:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:33:55.271 18:32:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:33:55.271 18:32:26 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:55.271 18:32:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:55.271 18:32:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:55.271 18:32:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:33:55.271 18:32:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:55.271 18:32:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:33:55.271 "name": "Existed_Raid", 00:33:55.271 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:55.271 "strip_size_kb": 64, 00:33:55.271 "state": "configuring", 00:33:55.271 "raid_level": "raid5f", 00:33:55.271 "superblock": false, 00:33:55.271 "num_base_bdevs": 3, 00:33:55.271 "num_base_bdevs_discovered": 2, 00:33:55.271 "num_base_bdevs_operational": 3, 00:33:55.271 "base_bdevs_list": [ 00:33:55.271 { 00:33:55.271 "name": "BaseBdev1", 00:33:55.271 "uuid": "1b9276e8-2fd7-4804-bb91-fbcbaa88f8fa", 00:33:55.271 "is_configured": true, 00:33:55.271 "data_offset": 0, 00:33:55.271 "data_size": 65536 00:33:55.271 }, 00:33:55.271 { 00:33:55.271 "name": null, 00:33:55.271 "uuid": "a72b2b31-7b98-4a97-8c00-edcc1fbf95c7", 00:33:55.271 "is_configured": false, 00:33:55.271 "data_offset": 0, 00:33:55.271 "data_size": 65536 00:33:55.271 }, 00:33:55.271 { 00:33:55.271 "name": "BaseBdev3", 00:33:55.271 "uuid": "ede3a657-a009-464f-bcd5-4ccfcf12393a", 00:33:55.271 "is_configured": true, 00:33:55.271 "data_offset": 0, 00:33:55.271 "data_size": 65536 00:33:55.271 } 00:33:55.271 ] 00:33:55.271 }' 00:33:55.271 18:32:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:33:55.271 18:32:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:55.530 18:32:26 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:55.530 18:32:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:55.530 18:32:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:55.530 18:32:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:33:55.530 18:32:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:55.530 18:32:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:33:55.530 18:32:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:33:55.530 18:32:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:55.530 18:32:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:55.788 [2024-12-06 18:32:26.478818] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:33:55.788 18:32:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:55.788 18:32:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:33:55.788 18:32:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:33:55.788 18:32:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:33:55.788 18:32:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:33:55.788 18:32:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:33:55.788 18:32:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:33:55.788 18:32:26 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:33:55.788 18:32:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:33:55.788 18:32:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:33:55.788 18:32:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:33:55.788 18:32:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:55.788 18:32:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:55.788 18:32:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:55.788 18:32:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:33:55.788 18:32:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:55.788 18:32:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:33:55.788 "name": "Existed_Raid", 00:33:55.788 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:55.788 "strip_size_kb": 64, 00:33:55.788 "state": "configuring", 00:33:55.788 "raid_level": "raid5f", 00:33:55.788 "superblock": false, 00:33:55.788 "num_base_bdevs": 3, 00:33:55.788 "num_base_bdevs_discovered": 1, 00:33:55.788 "num_base_bdevs_operational": 3, 00:33:55.788 "base_bdevs_list": [ 00:33:55.788 { 00:33:55.788 "name": "BaseBdev1", 00:33:55.788 "uuid": "1b9276e8-2fd7-4804-bb91-fbcbaa88f8fa", 00:33:55.788 "is_configured": true, 00:33:55.788 "data_offset": 0, 00:33:55.788 "data_size": 65536 00:33:55.788 }, 00:33:55.788 { 00:33:55.788 "name": null, 00:33:55.788 "uuid": "a72b2b31-7b98-4a97-8c00-edcc1fbf95c7", 00:33:55.788 "is_configured": false, 00:33:55.788 "data_offset": 0, 00:33:55.788 "data_size": 65536 00:33:55.788 }, 00:33:55.788 { 00:33:55.788 "name": null, 
00:33:55.788 "uuid": "ede3a657-a009-464f-bcd5-4ccfcf12393a", 00:33:55.788 "is_configured": false, 00:33:55.788 "data_offset": 0, 00:33:55.788 "data_size": 65536 00:33:55.788 } 00:33:55.788 ] 00:33:55.788 }' 00:33:55.788 18:32:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:33:55.788 18:32:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:56.047 18:32:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:33:56.048 18:32:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:56.048 18:32:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:56.048 18:32:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:56.048 18:32:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:56.048 18:32:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:33:56.048 18:32:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:33:56.048 18:32:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:56.048 18:32:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:56.048 [2024-12-06 18:32:26.930714] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:33:56.048 18:32:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:56.048 18:32:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:33:56.048 18:32:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:33:56.048 18:32:26 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:33:56.048 18:32:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:33:56.048 18:32:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:33:56.048 18:32:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:33:56.048 18:32:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:33:56.048 18:32:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:33:56.048 18:32:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:33:56.048 18:32:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:33:56.048 18:32:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:56.048 18:32:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:56.048 18:32:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:56.048 18:32:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:33:56.048 18:32:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:56.048 18:32:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:33:56.048 "name": "Existed_Raid", 00:33:56.048 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:56.048 "strip_size_kb": 64, 00:33:56.048 "state": "configuring", 00:33:56.048 "raid_level": "raid5f", 00:33:56.048 "superblock": false, 00:33:56.048 "num_base_bdevs": 3, 00:33:56.048 "num_base_bdevs_discovered": 2, 00:33:56.048 "num_base_bdevs_operational": 3, 00:33:56.048 "base_bdevs_list": [ 00:33:56.048 { 
00:33:56.048 "name": "BaseBdev1", 00:33:56.048 "uuid": "1b9276e8-2fd7-4804-bb91-fbcbaa88f8fa", 00:33:56.048 "is_configured": true, 00:33:56.048 "data_offset": 0, 00:33:56.048 "data_size": 65536 00:33:56.048 }, 00:33:56.048 { 00:33:56.048 "name": null, 00:33:56.048 "uuid": "a72b2b31-7b98-4a97-8c00-edcc1fbf95c7", 00:33:56.048 "is_configured": false, 00:33:56.048 "data_offset": 0, 00:33:56.048 "data_size": 65536 00:33:56.048 }, 00:33:56.048 { 00:33:56.048 "name": "BaseBdev3", 00:33:56.048 "uuid": "ede3a657-a009-464f-bcd5-4ccfcf12393a", 00:33:56.048 "is_configured": true, 00:33:56.048 "data_offset": 0, 00:33:56.048 "data_size": 65536 00:33:56.048 } 00:33:56.048 ] 00:33:56.048 }' 00:33:56.048 18:32:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:33:56.048 18:32:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:56.615 18:32:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:56.615 18:32:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:56.615 18:32:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:56.615 18:32:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:33:56.615 18:32:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:56.615 18:32:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:33:56.615 18:32:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:33:56.615 18:32:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:56.615 18:32:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:56.615 [2024-12-06 18:32:27.402727] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:33:56.615 18:32:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:56.615 18:32:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:33:56.615 18:32:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:33:56.615 18:32:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:33:56.615 18:32:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:33:56.615 18:32:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:33:56.615 18:32:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:33:56.615 18:32:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:33:56.615 18:32:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:33:56.615 18:32:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:33:56.615 18:32:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:33:56.615 18:32:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:56.615 18:32:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:56.615 18:32:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:56.615 18:32:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:33:56.615 18:32:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:56.616 18:32:27 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:33:56.616 "name": "Existed_Raid", 00:33:56.616 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:56.616 "strip_size_kb": 64, 00:33:56.616 "state": "configuring", 00:33:56.616 "raid_level": "raid5f", 00:33:56.616 "superblock": false, 00:33:56.616 "num_base_bdevs": 3, 00:33:56.616 "num_base_bdevs_discovered": 1, 00:33:56.616 "num_base_bdevs_operational": 3, 00:33:56.616 "base_bdevs_list": [ 00:33:56.616 { 00:33:56.616 "name": null, 00:33:56.616 "uuid": "1b9276e8-2fd7-4804-bb91-fbcbaa88f8fa", 00:33:56.616 "is_configured": false, 00:33:56.616 "data_offset": 0, 00:33:56.616 "data_size": 65536 00:33:56.616 }, 00:33:56.616 { 00:33:56.616 "name": null, 00:33:56.616 "uuid": "a72b2b31-7b98-4a97-8c00-edcc1fbf95c7", 00:33:56.616 "is_configured": false, 00:33:56.616 "data_offset": 0, 00:33:56.616 "data_size": 65536 00:33:56.616 }, 00:33:56.616 { 00:33:56.616 "name": "BaseBdev3", 00:33:56.616 "uuid": "ede3a657-a009-464f-bcd5-4ccfcf12393a", 00:33:56.616 "is_configured": true, 00:33:56.616 "data_offset": 0, 00:33:56.616 "data_size": 65536 00:33:56.616 } 00:33:56.616 ] 00:33:56.616 }' 00:33:56.616 18:32:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:33:56.616 18:32:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:57.184 18:32:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:33:57.184 18:32:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:57.184 18:32:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:57.184 18:32:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:57.184 18:32:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:57.184 18:32:27 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:33:57.184 18:32:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:33:57.184 18:32:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:57.184 18:32:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:57.185 [2024-12-06 18:32:27.946712] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:33:57.185 18:32:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:57.185 18:32:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:33:57.185 18:32:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:33:57.185 18:32:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:33:57.185 18:32:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:33:57.185 18:32:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:33:57.185 18:32:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:33:57.185 18:32:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:33:57.185 18:32:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:33:57.185 18:32:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:33:57.185 18:32:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:33:57.185 18:32:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:57.185 18:32:27 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:33:57.185 18:32:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:57.185 18:32:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:57.185 18:32:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:57.185 18:32:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:33:57.185 "name": "Existed_Raid", 00:33:57.185 "uuid": "00000000-0000-0000-0000-000000000000", 00:33:57.185 "strip_size_kb": 64, 00:33:57.185 "state": "configuring", 00:33:57.185 "raid_level": "raid5f", 00:33:57.185 "superblock": false, 00:33:57.185 "num_base_bdevs": 3, 00:33:57.185 "num_base_bdevs_discovered": 2, 00:33:57.185 "num_base_bdevs_operational": 3, 00:33:57.185 "base_bdevs_list": [ 00:33:57.185 { 00:33:57.185 "name": null, 00:33:57.185 "uuid": "1b9276e8-2fd7-4804-bb91-fbcbaa88f8fa", 00:33:57.185 "is_configured": false, 00:33:57.185 "data_offset": 0, 00:33:57.185 "data_size": 65536 00:33:57.185 }, 00:33:57.185 { 00:33:57.185 "name": "BaseBdev2", 00:33:57.185 "uuid": "a72b2b31-7b98-4a97-8c00-edcc1fbf95c7", 00:33:57.185 "is_configured": true, 00:33:57.185 "data_offset": 0, 00:33:57.185 "data_size": 65536 00:33:57.185 }, 00:33:57.185 { 00:33:57.185 "name": "BaseBdev3", 00:33:57.185 "uuid": "ede3a657-a009-464f-bcd5-4ccfcf12393a", 00:33:57.185 "is_configured": true, 00:33:57.185 "data_offset": 0, 00:33:57.185 "data_size": 65536 00:33:57.185 } 00:33:57.185 ] 00:33:57.185 }' 00:33:57.185 18:32:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:33:57.185 18:32:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:57.444 18:32:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:57.444 18:32:28 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:33:57.444 18:32:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:57.444 18:32:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:57.444 18:32:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:57.704 18:32:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:33:57.704 18:32:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:57.704 18:32:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:57.704 18:32:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:57.704 18:32:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:33:57.704 18:32:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:57.704 18:32:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 1b9276e8-2fd7-4804-bb91-fbcbaa88f8fa 00:33:57.704 18:32:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:57.704 18:32:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:57.704 [2024-12-06 18:32:28.494205] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:33:57.704 [2024-12-06 18:32:28.494256] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:33:57.704 [2024-12-06 18:32:28.494269] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:33:57.704 [2024-12-06 18:32:28.494586] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d000006220 00:33:57.704 [2024-12-06 18:32:28.499973] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:33:57.704 [2024-12-06 18:32:28.499999] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:33:57.704 [2024-12-06 18:32:28.500329] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:33:57.704 NewBaseBdev 00:33:57.704 18:32:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:57.704 18:32:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:33:57.704 18:32:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:33:57.704 18:32:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:33:57.704 18:32:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:33:57.704 18:32:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:33:57.704 18:32:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:33:57.704 18:32:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:33:57.704 18:32:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:57.704 18:32:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:57.704 18:32:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:57.704 18:32:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:33:57.704 18:32:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:57.704 18:32:28 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:57.704 [ 00:33:57.704 { 00:33:57.704 "name": "NewBaseBdev", 00:33:57.704 "aliases": [ 00:33:57.704 "1b9276e8-2fd7-4804-bb91-fbcbaa88f8fa" 00:33:57.704 ], 00:33:57.704 "product_name": "Malloc disk", 00:33:57.704 "block_size": 512, 00:33:57.704 "num_blocks": 65536, 00:33:57.704 "uuid": "1b9276e8-2fd7-4804-bb91-fbcbaa88f8fa", 00:33:57.704 "assigned_rate_limits": { 00:33:57.704 "rw_ios_per_sec": 0, 00:33:57.704 "rw_mbytes_per_sec": 0, 00:33:57.704 "r_mbytes_per_sec": 0, 00:33:57.704 "w_mbytes_per_sec": 0 00:33:57.705 }, 00:33:57.705 "claimed": true, 00:33:57.705 "claim_type": "exclusive_write", 00:33:57.705 "zoned": false, 00:33:57.705 "supported_io_types": { 00:33:57.705 "read": true, 00:33:57.705 "write": true, 00:33:57.705 "unmap": true, 00:33:57.705 "flush": true, 00:33:57.705 "reset": true, 00:33:57.705 "nvme_admin": false, 00:33:57.705 "nvme_io": false, 00:33:57.705 "nvme_io_md": false, 00:33:57.705 "write_zeroes": true, 00:33:57.705 "zcopy": true, 00:33:57.705 "get_zone_info": false, 00:33:57.705 "zone_management": false, 00:33:57.705 "zone_append": false, 00:33:57.705 "compare": false, 00:33:57.705 "compare_and_write": false, 00:33:57.705 "abort": true, 00:33:57.705 "seek_hole": false, 00:33:57.705 "seek_data": false, 00:33:57.705 "copy": true, 00:33:57.705 "nvme_iov_md": false 00:33:57.705 }, 00:33:57.705 "memory_domains": [ 00:33:57.705 { 00:33:57.705 "dma_device_id": "system", 00:33:57.705 "dma_device_type": 1 00:33:57.705 }, 00:33:57.705 { 00:33:57.705 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:33:57.705 "dma_device_type": 2 00:33:57.705 } 00:33:57.705 ], 00:33:57.705 "driver_specific": {} 00:33:57.705 } 00:33:57.705 ] 00:33:57.705 18:32:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:57.705 18:32:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:33:57.705 18:32:28 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:33:57.705 18:32:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:33:57.705 18:32:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:33:57.705 18:32:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:33:57.705 18:32:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:33:57.705 18:32:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:33:57.705 18:32:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:33:57.705 18:32:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:33:57.705 18:32:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:33:57.705 18:32:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:33:57.705 18:32:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:33:57.705 18:32:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:33:57.705 18:32:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:57.705 18:32:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:57.705 18:32:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:57.705 18:32:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:33:57.705 "name": "Existed_Raid", 00:33:57.705 "uuid": "4c098c4c-5dcd-43d4-b150-de458fb75078", 00:33:57.705 "strip_size_kb": 64, 00:33:57.705 "state": "online", 
00:33:57.705 "raid_level": "raid5f", 00:33:57.705 "superblock": false, 00:33:57.705 "num_base_bdevs": 3, 00:33:57.705 "num_base_bdevs_discovered": 3, 00:33:57.705 "num_base_bdevs_operational": 3, 00:33:57.705 "base_bdevs_list": [ 00:33:57.705 { 00:33:57.705 "name": "NewBaseBdev", 00:33:57.705 "uuid": "1b9276e8-2fd7-4804-bb91-fbcbaa88f8fa", 00:33:57.705 "is_configured": true, 00:33:57.705 "data_offset": 0, 00:33:57.705 "data_size": 65536 00:33:57.705 }, 00:33:57.705 { 00:33:57.705 "name": "BaseBdev2", 00:33:57.705 "uuid": "a72b2b31-7b98-4a97-8c00-edcc1fbf95c7", 00:33:57.705 "is_configured": true, 00:33:57.705 "data_offset": 0, 00:33:57.705 "data_size": 65536 00:33:57.705 }, 00:33:57.705 { 00:33:57.705 "name": "BaseBdev3", 00:33:57.705 "uuid": "ede3a657-a009-464f-bcd5-4ccfcf12393a", 00:33:57.705 "is_configured": true, 00:33:57.705 "data_offset": 0, 00:33:57.705 "data_size": 65536 00:33:57.705 } 00:33:57.705 ] 00:33:57.705 }' 00:33:57.705 18:32:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:33:57.705 18:32:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:58.274 18:32:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:33:58.274 18:32:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:33:58.274 18:32:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:33:58.274 18:32:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:33:58.274 18:32:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:33:58.274 18:32:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:33:58.274 18:32:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:33:58.274 18:32:28 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:33:58.274 18:32:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:58.274 18:32:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:58.275 [2024-12-06 18:32:28.927167] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:33:58.275 18:32:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:58.275 18:32:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:33:58.275 "name": "Existed_Raid", 00:33:58.275 "aliases": [ 00:33:58.275 "4c098c4c-5dcd-43d4-b150-de458fb75078" 00:33:58.275 ], 00:33:58.275 "product_name": "Raid Volume", 00:33:58.275 "block_size": 512, 00:33:58.275 "num_blocks": 131072, 00:33:58.275 "uuid": "4c098c4c-5dcd-43d4-b150-de458fb75078", 00:33:58.275 "assigned_rate_limits": { 00:33:58.275 "rw_ios_per_sec": 0, 00:33:58.275 "rw_mbytes_per_sec": 0, 00:33:58.275 "r_mbytes_per_sec": 0, 00:33:58.275 "w_mbytes_per_sec": 0 00:33:58.275 }, 00:33:58.275 "claimed": false, 00:33:58.275 "zoned": false, 00:33:58.275 "supported_io_types": { 00:33:58.275 "read": true, 00:33:58.275 "write": true, 00:33:58.275 "unmap": false, 00:33:58.275 "flush": false, 00:33:58.275 "reset": true, 00:33:58.275 "nvme_admin": false, 00:33:58.275 "nvme_io": false, 00:33:58.275 "nvme_io_md": false, 00:33:58.275 "write_zeroes": true, 00:33:58.275 "zcopy": false, 00:33:58.275 "get_zone_info": false, 00:33:58.275 "zone_management": false, 00:33:58.275 "zone_append": false, 00:33:58.275 "compare": false, 00:33:58.275 "compare_and_write": false, 00:33:58.275 "abort": false, 00:33:58.275 "seek_hole": false, 00:33:58.275 "seek_data": false, 00:33:58.275 "copy": false, 00:33:58.275 "nvme_iov_md": false 00:33:58.275 }, 00:33:58.275 "driver_specific": { 00:33:58.275 "raid": { 00:33:58.275 "uuid": 
"4c098c4c-5dcd-43d4-b150-de458fb75078", 00:33:58.275 "strip_size_kb": 64, 00:33:58.275 "state": "online", 00:33:58.275 "raid_level": "raid5f", 00:33:58.275 "superblock": false, 00:33:58.275 "num_base_bdevs": 3, 00:33:58.275 "num_base_bdevs_discovered": 3, 00:33:58.275 "num_base_bdevs_operational": 3, 00:33:58.275 "base_bdevs_list": [ 00:33:58.275 { 00:33:58.275 "name": "NewBaseBdev", 00:33:58.275 "uuid": "1b9276e8-2fd7-4804-bb91-fbcbaa88f8fa", 00:33:58.275 "is_configured": true, 00:33:58.275 "data_offset": 0, 00:33:58.275 "data_size": 65536 00:33:58.275 }, 00:33:58.275 { 00:33:58.275 "name": "BaseBdev2", 00:33:58.275 "uuid": "a72b2b31-7b98-4a97-8c00-edcc1fbf95c7", 00:33:58.275 "is_configured": true, 00:33:58.275 "data_offset": 0, 00:33:58.275 "data_size": 65536 00:33:58.275 }, 00:33:58.275 { 00:33:58.275 "name": "BaseBdev3", 00:33:58.275 "uuid": "ede3a657-a009-464f-bcd5-4ccfcf12393a", 00:33:58.275 "is_configured": true, 00:33:58.275 "data_offset": 0, 00:33:58.275 "data_size": 65536 00:33:58.275 } 00:33:58.275 ] 00:33:58.275 } 00:33:58.275 } 00:33:58.275 }' 00:33:58.275 18:32:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:33:58.275 18:32:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:33:58.275 BaseBdev2 00:33:58.275 BaseBdev3' 00:33:58.275 18:32:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:33:58.275 18:32:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:33:58.275 18:32:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:33:58.275 18:32:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:33:58.275 18:32:29 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:33:58.275 18:32:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:58.275 18:32:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:33:58.275 18:32:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:58.275 18:32:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:33:58.275 18:32:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:33:58.275 18:32:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:33:58.275 18:32:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:33:58.275 18:32:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:33:58.275 18:32:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:58.275 18:32:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:58.275 18:32:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:58.275 18:32:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:33:58.275 18:32:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:33:58.275 18:32:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:33:58.275 18:32:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:33:58.275 18:32:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev3 00:33:58.275 18:32:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:58.275 18:32:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:58.275 18:32:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:58.275 18:32:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:33:58.275 18:32:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:33:58.275 18:32:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:33:58.275 18:32:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:58.275 18:32:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:33:58.275 [2024-12-06 18:32:29.186690] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:33:58.275 [2024-12-06 18:32:29.186723] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:33:58.275 [2024-12-06 18:32:29.186812] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:33:58.275 [2024-12-06 18:32:29.187170] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:33:58.275 [2024-12-06 18:32:29.187188] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:33:58.275 18:32:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:58.275 18:32:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 79632 00:33:58.275 18:32:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 79632 ']' 00:33:58.275 18:32:29 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@958 -- # kill -0 79632 00:33:58.275 18:32:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@959 -- # uname 00:33:58.275 18:32:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:58.275 18:32:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 79632 00:33:58.535 killing process with pid 79632 00:33:58.535 18:32:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:33:58.535 18:32:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:33:58.535 18:32:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 79632' 00:33:58.535 18:32:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@973 -- # kill 79632 00:33:58.535 [2024-12-06 18:32:29.242183] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:33:58.535 18:32:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@978 -- # wait 79632 00:33:58.795 [2024-12-06 18:32:29.572793] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:34:00.175 18:32:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:34:00.175 00:34:00.175 real 0m10.419s 00:34:00.175 user 0m16.169s 00:34:00.175 sys 0m2.239s 00:34:00.175 18:32:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:00.175 ************************************ 00:34:00.175 END TEST raid5f_state_function_test 00:34:00.175 ************************************ 00:34:00.175 18:32:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:34:00.175 18:32:30 bdev_raid -- bdev/bdev_raid.sh@987 -- # run_test raid5f_state_function_test_sb raid_state_function_test raid5f 3 true 00:34:00.175 18:32:30 bdev_raid -- 
common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:34:00.175 18:32:30 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:00.175 18:32:30 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:34:00.175 ************************************ 00:34:00.175 START TEST raid5f_state_function_test_sb 00:34:00.175 ************************************ 00:34:00.175 18:32:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid5f 3 true 00:34:00.175 18:32:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:34:00.175 18:32:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:34:00.175 18:32:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:34:00.175 18:32:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:34:00.175 18:32:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:34:00.175 18:32:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:34:00.175 18:32:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:34:00.175 18:32:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:34:00.175 18:32:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:34:00.175 18:32:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:34:00.175 18:32:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:34:00.175 18:32:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:34:00.175 18:32:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:34:00.175 18:32:30 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:34:00.175 18:32:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:34:00.175 18:32:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:34:00.175 18:32:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:34:00.175 18:32:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:34:00.175 18:32:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:34:00.175 18:32:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:34:00.175 18:32:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:34:00.175 18:32:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:34:00.175 18:32:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:34:00.175 18:32:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:34:00.175 18:32:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:34:00.175 18:32:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:34:00.175 18:32:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=80248 00:34:00.175 18:32:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:34:00.175 Process raid pid: 80248 00:34:00.175 18:32:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 80248' 00:34:00.175 18:32:30 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@231 -- # waitforlisten 80248 00:34:00.175 18:32:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 80248 ']' 00:34:00.175 18:32:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:00.175 18:32:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:00.175 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:00.175 18:32:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:00.175 18:32:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:00.175 18:32:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:00.175 [2024-12-06 18:32:30.999579] Starting SPDK v25.01-pre git sha1 b6a18b192 / DPDK 24.03.0 initialization... 
00:34:00.175 [2024-12-06 18:32:31.000475] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:00.435 [2024-12-06 18:32:31.181033] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:00.435 [2024-12-06 18:32:31.326786] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:00.694 [2024-12-06 18:32:31.564023] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:34:00.694 [2024-12-06 18:32:31.564099] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:34:00.954 18:32:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:00.954 18:32:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:34:00.954 18:32:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:34:00.954 18:32:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:00.954 18:32:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:00.954 [2024-12-06 18:32:31.844702] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:34:00.954 [2024-12-06 18:32:31.844771] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:34:00.954 [2024-12-06 18:32:31.844785] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:34:00.954 [2024-12-06 18:32:31.844799] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:34:00.954 [2024-12-06 18:32:31.844814] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to 
find bdev with name: BaseBdev3 00:34:00.954 [2024-12-06 18:32:31.844827] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:34:00.954 18:32:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:00.954 18:32:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:34:00.954 18:32:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:34:00.954 18:32:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:34:00.954 18:32:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:34:00.954 18:32:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:34:00.954 18:32:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:34:00.954 18:32:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:34:00.954 18:32:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:34:00.954 18:32:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:34:00.954 18:32:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:34:00.954 18:32:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:00.954 18:32:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:34:00.954 18:32:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:00.954 18:32:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:00.954 18:32:31 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:00.954 18:32:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:34:00.954 "name": "Existed_Raid", 00:34:00.954 "uuid": "b6ca057b-7544-4167-b424-f5ad41e94454", 00:34:00.954 "strip_size_kb": 64, 00:34:00.954 "state": "configuring", 00:34:00.954 "raid_level": "raid5f", 00:34:00.954 "superblock": true, 00:34:00.954 "num_base_bdevs": 3, 00:34:00.954 "num_base_bdevs_discovered": 0, 00:34:00.954 "num_base_bdevs_operational": 3, 00:34:00.954 "base_bdevs_list": [ 00:34:00.954 { 00:34:00.954 "name": "BaseBdev1", 00:34:00.954 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:00.954 "is_configured": false, 00:34:00.954 "data_offset": 0, 00:34:00.954 "data_size": 0 00:34:00.954 }, 00:34:00.954 { 00:34:00.954 "name": "BaseBdev2", 00:34:00.954 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:00.954 "is_configured": false, 00:34:00.954 "data_offset": 0, 00:34:00.954 "data_size": 0 00:34:00.954 }, 00:34:00.954 { 00:34:00.954 "name": "BaseBdev3", 00:34:00.954 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:00.954 "is_configured": false, 00:34:00.954 "data_offset": 0, 00:34:00.954 "data_size": 0 00:34:00.954 } 00:34:00.954 ] 00:34:00.954 }' 00:34:00.954 18:32:31 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:34:00.954 18:32:31 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:01.523 18:32:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:34:01.523 18:32:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:01.523 18:32:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:01.523 [2024-12-06 18:32:32.268072] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:34:01.523 
[2024-12-06 18:32:32.268115] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:34:01.523 18:32:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:01.523 18:32:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:34:01.523 18:32:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:01.523 18:32:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:01.523 [2024-12-06 18:32:32.276074] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:34:01.523 [2024-12-06 18:32:32.276129] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:34:01.523 [2024-12-06 18:32:32.276140] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:34:01.523 [2024-12-06 18:32:32.276176] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:34:01.523 [2024-12-06 18:32:32.276184] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:34:01.523 [2024-12-06 18:32:32.276197] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:34:01.523 18:32:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:01.523 18:32:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:34:01.523 18:32:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:01.523 18:32:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:01.523 [2024-12-06 18:32:32.328036] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:34:01.523 BaseBdev1 00:34:01.523 18:32:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:01.523 18:32:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:34:01.523 18:32:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:34:01.523 18:32:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:34:01.523 18:32:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:34:01.523 18:32:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:34:01.523 18:32:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:34:01.523 18:32:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:34:01.523 18:32:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:01.523 18:32:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:01.523 18:32:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:01.523 18:32:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:34:01.523 18:32:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:01.523 18:32:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:01.523 [ 00:34:01.523 { 00:34:01.523 "name": "BaseBdev1", 00:34:01.523 "aliases": [ 00:34:01.523 "788426cf-dc3d-400b-81de-75961fc5321f" 00:34:01.523 ], 00:34:01.523 "product_name": "Malloc disk", 00:34:01.523 "block_size": 512, 00:34:01.523 
"num_blocks": 65536, 00:34:01.523 "uuid": "788426cf-dc3d-400b-81de-75961fc5321f", 00:34:01.523 "assigned_rate_limits": { 00:34:01.523 "rw_ios_per_sec": 0, 00:34:01.523 "rw_mbytes_per_sec": 0, 00:34:01.523 "r_mbytes_per_sec": 0, 00:34:01.523 "w_mbytes_per_sec": 0 00:34:01.523 }, 00:34:01.523 "claimed": true, 00:34:01.523 "claim_type": "exclusive_write", 00:34:01.523 "zoned": false, 00:34:01.523 "supported_io_types": { 00:34:01.523 "read": true, 00:34:01.523 "write": true, 00:34:01.523 "unmap": true, 00:34:01.523 "flush": true, 00:34:01.523 "reset": true, 00:34:01.523 "nvme_admin": false, 00:34:01.523 "nvme_io": false, 00:34:01.523 "nvme_io_md": false, 00:34:01.523 "write_zeroes": true, 00:34:01.523 "zcopy": true, 00:34:01.523 "get_zone_info": false, 00:34:01.523 "zone_management": false, 00:34:01.523 "zone_append": false, 00:34:01.523 "compare": false, 00:34:01.523 "compare_and_write": false, 00:34:01.523 "abort": true, 00:34:01.523 "seek_hole": false, 00:34:01.523 "seek_data": false, 00:34:01.523 "copy": true, 00:34:01.523 "nvme_iov_md": false 00:34:01.523 }, 00:34:01.523 "memory_domains": [ 00:34:01.523 { 00:34:01.523 "dma_device_id": "system", 00:34:01.523 "dma_device_type": 1 00:34:01.523 }, 00:34:01.523 { 00:34:01.523 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:34:01.523 "dma_device_type": 2 00:34:01.523 } 00:34:01.523 ], 00:34:01.523 "driver_specific": {} 00:34:01.523 } 00:34:01.523 ] 00:34:01.523 18:32:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:01.523 18:32:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:34:01.523 18:32:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:34:01.523 18:32:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:34:01.523 18:32:32 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:34:01.523 18:32:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:34:01.523 18:32:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:34:01.523 18:32:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:34:01.523 18:32:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:34:01.523 18:32:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:34:01.523 18:32:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:34:01.523 18:32:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:34:01.523 18:32:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:01.524 18:32:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:01.524 18:32:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:34:01.524 18:32:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:01.524 18:32:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:01.524 18:32:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:34:01.524 "name": "Existed_Raid", 00:34:01.524 "uuid": "d322e6e0-02e4-45f6-8213-5b3ba34cf4fb", 00:34:01.524 "strip_size_kb": 64, 00:34:01.524 "state": "configuring", 00:34:01.524 "raid_level": "raid5f", 00:34:01.524 "superblock": true, 00:34:01.524 "num_base_bdevs": 3, 00:34:01.524 "num_base_bdevs_discovered": 1, 00:34:01.524 "num_base_bdevs_operational": 3, 00:34:01.524 "base_bdevs_list": [ 00:34:01.524 { 00:34:01.524 
"name": "BaseBdev1", 00:34:01.524 "uuid": "788426cf-dc3d-400b-81de-75961fc5321f", 00:34:01.524 "is_configured": true, 00:34:01.524 "data_offset": 2048, 00:34:01.524 "data_size": 63488 00:34:01.524 }, 00:34:01.524 { 00:34:01.524 "name": "BaseBdev2", 00:34:01.524 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:01.524 "is_configured": false, 00:34:01.524 "data_offset": 0, 00:34:01.524 "data_size": 0 00:34:01.524 }, 00:34:01.524 { 00:34:01.524 "name": "BaseBdev3", 00:34:01.524 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:01.524 "is_configured": false, 00:34:01.524 "data_offset": 0, 00:34:01.524 "data_size": 0 00:34:01.524 } 00:34:01.524 ] 00:34:01.524 }' 00:34:01.524 18:32:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:34:01.524 18:32:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:02.092 18:32:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:34:02.092 18:32:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:02.092 18:32:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:02.092 [2024-12-06 18:32:32.803384] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:34:02.092 [2024-12-06 18:32:32.803431] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:34:02.092 18:32:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:02.092 18:32:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:34:02.092 18:32:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:02.092 18:32:32 bdev_raid.raid5f_state_function_test_sb 
-- common/autotest_common.sh@10 -- # set +x 00:34:02.092 [2024-12-06 18:32:32.815445] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:34:02.092 [2024-12-06 18:32:32.817820] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:34:02.092 [2024-12-06 18:32:32.817868] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:34:02.092 [2024-12-06 18:32:32.817879] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:34:02.092 [2024-12-06 18:32:32.817892] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:34:02.092 18:32:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:02.092 18:32:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:34:02.092 18:32:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:34:02.092 18:32:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:34:02.092 18:32:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:34:02.092 18:32:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:34:02.092 18:32:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:34:02.092 18:32:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:34:02.092 18:32:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:34:02.092 18:32:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:34:02.092 18:32:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local 
num_base_bdevs 00:34:02.092 18:32:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:34:02.092 18:32:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:34:02.092 18:32:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:34:02.092 18:32:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:02.092 18:32:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:02.092 18:32:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:02.092 18:32:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:02.092 18:32:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:34:02.092 "name": "Existed_Raid", 00:34:02.092 "uuid": "02687919-a9f4-4356-95ce-fb54ef4960f0", 00:34:02.092 "strip_size_kb": 64, 00:34:02.092 "state": "configuring", 00:34:02.092 "raid_level": "raid5f", 00:34:02.092 "superblock": true, 00:34:02.092 "num_base_bdevs": 3, 00:34:02.092 "num_base_bdevs_discovered": 1, 00:34:02.092 "num_base_bdevs_operational": 3, 00:34:02.092 "base_bdevs_list": [ 00:34:02.092 { 00:34:02.092 "name": "BaseBdev1", 00:34:02.092 "uuid": "788426cf-dc3d-400b-81de-75961fc5321f", 00:34:02.092 "is_configured": true, 00:34:02.092 "data_offset": 2048, 00:34:02.092 "data_size": 63488 00:34:02.092 }, 00:34:02.092 { 00:34:02.092 "name": "BaseBdev2", 00:34:02.092 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:02.092 "is_configured": false, 00:34:02.092 "data_offset": 0, 00:34:02.092 "data_size": 0 00:34:02.092 }, 00:34:02.092 { 00:34:02.092 "name": "BaseBdev3", 00:34:02.093 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:02.093 "is_configured": false, 00:34:02.093 "data_offset": 0, 00:34:02.093 "data_size": 
0 00:34:02.093 } 00:34:02.093 ] 00:34:02.093 }' 00:34:02.093 18:32:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:34:02.093 18:32:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:02.352 18:32:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:34:02.352 18:32:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:02.352 18:32:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:02.352 [2024-12-06 18:32:33.236279] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:34:02.352 BaseBdev2 00:34:02.352 18:32:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:02.352 18:32:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:34:02.352 18:32:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:34:02.352 18:32:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:34:02.352 18:32:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:34:02.352 18:32:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:34:02.352 18:32:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:34:02.352 18:32:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:34:02.352 18:32:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:02.352 18:32:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:02.352 18:32:33 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:02.352 18:32:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:34:02.352 18:32:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:02.352 18:32:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:02.352 [ 00:34:02.352 { 00:34:02.352 "name": "BaseBdev2", 00:34:02.352 "aliases": [ 00:34:02.352 "c7ccdb7a-adee-40db-b9e2-f99a21014078" 00:34:02.352 ], 00:34:02.352 "product_name": "Malloc disk", 00:34:02.352 "block_size": 512, 00:34:02.352 "num_blocks": 65536, 00:34:02.352 "uuid": "c7ccdb7a-adee-40db-b9e2-f99a21014078", 00:34:02.352 "assigned_rate_limits": { 00:34:02.352 "rw_ios_per_sec": 0, 00:34:02.352 "rw_mbytes_per_sec": 0, 00:34:02.352 "r_mbytes_per_sec": 0, 00:34:02.352 "w_mbytes_per_sec": 0 00:34:02.352 }, 00:34:02.352 "claimed": true, 00:34:02.352 "claim_type": "exclusive_write", 00:34:02.352 "zoned": false, 00:34:02.352 "supported_io_types": { 00:34:02.352 "read": true, 00:34:02.352 "write": true, 00:34:02.352 "unmap": true, 00:34:02.352 "flush": true, 00:34:02.352 "reset": true, 00:34:02.352 "nvme_admin": false, 00:34:02.352 "nvme_io": false, 00:34:02.352 "nvme_io_md": false, 00:34:02.352 "write_zeroes": true, 00:34:02.352 "zcopy": true, 00:34:02.352 "get_zone_info": false, 00:34:02.352 "zone_management": false, 00:34:02.352 "zone_append": false, 00:34:02.352 "compare": false, 00:34:02.352 "compare_and_write": false, 00:34:02.352 "abort": true, 00:34:02.352 "seek_hole": false, 00:34:02.352 "seek_data": false, 00:34:02.352 "copy": true, 00:34:02.352 "nvme_iov_md": false 00:34:02.352 }, 00:34:02.352 "memory_domains": [ 00:34:02.352 { 00:34:02.352 "dma_device_id": "system", 00:34:02.352 "dma_device_type": 1 00:34:02.352 }, 00:34:02.352 { 00:34:02.352 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:34:02.352 "dma_device_type": 2 00:34:02.352 } 
00:34:02.352 ], 00:34:02.352 "driver_specific": {} 00:34:02.352 } 00:34:02.352 ] 00:34:02.352 18:32:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:02.352 18:32:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:34:02.352 18:32:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:34:02.352 18:32:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:34:02.352 18:32:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:34:02.352 18:32:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:34:02.352 18:32:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:34:02.352 18:32:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:34:02.352 18:32:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:34:02.352 18:32:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:34:02.352 18:32:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:34:02.352 18:32:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:34:02.352 18:32:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:34:02.352 18:32:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:34:02.352 18:32:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:02.352 18:32:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:02.352 18:32:33 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:34:02.352 18:32:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:02.612 18:32:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:02.612 18:32:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:34:02.612 "name": "Existed_Raid", 00:34:02.612 "uuid": "02687919-a9f4-4356-95ce-fb54ef4960f0", 00:34:02.612 "strip_size_kb": 64, 00:34:02.612 "state": "configuring", 00:34:02.612 "raid_level": "raid5f", 00:34:02.612 "superblock": true, 00:34:02.612 "num_base_bdevs": 3, 00:34:02.612 "num_base_bdevs_discovered": 2, 00:34:02.612 "num_base_bdevs_operational": 3, 00:34:02.612 "base_bdevs_list": [ 00:34:02.612 { 00:34:02.612 "name": "BaseBdev1", 00:34:02.612 "uuid": "788426cf-dc3d-400b-81de-75961fc5321f", 00:34:02.612 "is_configured": true, 00:34:02.612 "data_offset": 2048, 00:34:02.612 "data_size": 63488 00:34:02.612 }, 00:34:02.612 { 00:34:02.612 "name": "BaseBdev2", 00:34:02.612 "uuid": "c7ccdb7a-adee-40db-b9e2-f99a21014078", 00:34:02.612 "is_configured": true, 00:34:02.612 "data_offset": 2048, 00:34:02.612 "data_size": 63488 00:34:02.612 }, 00:34:02.612 { 00:34:02.612 "name": "BaseBdev3", 00:34:02.612 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:02.612 "is_configured": false, 00:34:02.612 "data_offset": 0, 00:34:02.613 "data_size": 0 00:34:02.613 } 00:34:02.613 ] 00:34:02.613 }' 00:34:02.613 18:32:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:34:02.613 18:32:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:02.914 18:32:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:34:02.914 18:32:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 
-- # xtrace_disable 00:34:02.914 18:32:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:02.914 [2024-12-06 18:32:33.755060] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:34:02.914 [2024-12-06 18:32:33.755395] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:34:02.914 [2024-12-06 18:32:33.755422] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:34:02.914 BaseBdev3 00:34:02.914 [2024-12-06 18:32:33.755751] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:34:02.914 18:32:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:02.914 18:32:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:34:02.914 18:32:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:34:02.914 18:32:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:34:02.914 18:32:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:34:02.914 18:32:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:34:02.914 18:32:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:34:02.914 18:32:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:34:02.914 18:32:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:02.914 18:32:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:02.914 [2024-12-06 18:32:33.761731] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:34:02.914 [2024-12-06 18:32:33.761754] 
bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:34:02.914 [2024-12-06 18:32:33.762044] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:34:02.914 18:32:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:02.914 18:32:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:34:02.914 18:32:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:02.914 18:32:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:02.914 [ 00:34:02.914 { 00:34:02.914 "name": "BaseBdev3", 00:34:02.914 "aliases": [ 00:34:02.914 "b75d2b54-09a5-43b8-b191-c9ba05112ee5" 00:34:02.914 ], 00:34:02.914 "product_name": "Malloc disk", 00:34:02.914 "block_size": 512, 00:34:02.914 "num_blocks": 65536, 00:34:02.914 "uuid": "b75d2b54-09a5-43b8-b191-c9ba05112ee5", 00:34:02.914 "assigned_rate_limits": { 00:34:02.914 "rw_ios_per_sec": 0, 00:34:02.914 "rw_mbytes_per_sec": 0, 00:34:02.914 "r_mbytes_per_sec": 0, 00:34:02.914 "w_mbytes_per_sec": 0 00:34:02.914 }, 00:34:02.914 "claimed": true, 00:34:02.914 "claim_type": "exclusive_write", 00:34:02.914 "zoned": false, 00:34:02.914 "supported_io_types": { 00:34:02.914 "read": true, 00:34:02.914 "write": true, 00:34:02.914 "unmap": true, 00:34:02.914 "flush": true, 00:34:02.914 "reset": true, 00:34:02.914 "nvme_admin": false, 00:34:02.914 "nvme_io": false, 00:34:02.914 "nvme_io_md": false, 00:34:02.914 "write_zeroes": true, 00:34:02.914 "zcopy": true, 00:34:02.915 "get_zone_info": false, 00:34:02.915 "zone_management": false, 00:34:02.915 "zone_append": false, 00:34:02.915 "compare": false, 00:34:02.915 "compare_and_write": false, 00:34:02.915 "abort": true, 00:34:02.915 "seek_hole": false, 00:34:02.915 "seek_data": false, 00:34:02.915 "copy": true, 00:34:02.915 
"nvme_iov_md": false 00:34:02.915 }, 00:34:02.915 "memory_domains": [ 00:34:02.915 { 00:34:02.915 "dma_device_id": "system", 00:34:02.915 "dma_device_type": 1 00:34:02.915 }, 00:34:02.915 { 00:34:02.915 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:34:02.915 "dma_device_type": 2 00:34:02.915 } 00:34:02.915 ], 00:34:02.915 "driver_specific": {} 00:34:02.915 } 00:34:02.915 ] 00:34:02.915 18:32:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:02.915 18:32:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:34:02.915 18:32:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:34:02.915 18:32:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:34:02.915 18:32:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:34:02.915 18:32:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:34:02.915 18:32:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:34:02.915 18:32:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:34:02.915 18:32:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:34:02.915 18:32:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:34:02.915 18:32:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:34:02.915 18:32:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:34:02.915 18:32:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:34:02.915 18:32:33 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:34:02.915 18:32:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:02.915 18:32:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:02.915 18:32:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:34:02.915 18:32:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:02.915 18:32:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:03.203 18:32:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:34:03.203 "name": "Existed_Raid", 00:34:03.203 "uuid": "02687919-a9f4-4356-95ce-fb54ef4960f0", 00:34:03.203 "strip_size_kb": 64, 00:34:03.203 "state": "online", 00:34:03.203 "raid_level": "raid5f", 00:34:03.203 "superblock": true, 00:34:03.203 "num_base_bdevs": 3, 00:34:03.203 "num_base_bdevs_discovered": 3, 00:34:03.203 "num_base_bdevs_operational": 3, 00:34:03.203 "base_bdevs_list": [ 00:34:03.203 { 00:34:03.203 "name": "BaseBdev1", 00:34:03.203 "uuid": "788426cf-dc3d-400b-81de-75961fc5321f", 00:34:03.203 "is_configured": true, 00:34:03.203 "data_offset": 2048, 00:34:03.203 "data_size": 63488 00:34:03.203 }, 00:34:03.203 { 00:34:03.203 "name": "BaseBdev2", 00:34:03.203 "uuid": "c7ccdb7a-adee-40db-b9e2-f99a21014078", 00:34:03.203 "is_configured": true, 00:34:03.203 "data_offset": 2048, 00:34:03.203 "data_size": 63488 00:34:03.203 }, 00:34:03.203 { 00:34:03.203 "name": "BaseBdev3", 00:34:03.203 "uuid": "b75d2b54-09a5-43b8-b191-c9ba05112ee5", 00:34:03.203 "is_configured": true, 00:34:03.203 "data_offset": 2048, 00:34:03.203 "data_size": 63488 00:34:03.203 } 00:34:03.203 ] 00:34:03.203 }' 00:34:03.203 18:32:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:34:03.203 18:32:33 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:03.462 18:32:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:34:03.463 18:32:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:34:03.463 18:32:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:34:03.463 18:32:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:34:03.463 18:32:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:34:03.463 18:32:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:34:03.463 18:32:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:34:03.463 18:32:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:03.463 18:32:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:03.463 18:32:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:34:03.463 [2024-12-06 18:32:34.224468] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:34:03.463 18:32:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:03.463 18:32:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:34:03.463 "name": "Existed_Raid", 00:34:03.463 "aliases": [ 00:34:03.463 "02687919-a9f4-4356-95ce-fb54ef4960f0" 00:34:03.463 ], 00:34:03.463 "product_name": "Raid Volume", 00:34:03.463 "block_size": 512, 00:34:03.463 "num_blocks": 126976, 00:34:03.463 "uuid": "02687919-a9f4-4356-95ce-fb54ef4960f0", 00:34:03.463 "assigned_rate_limits": { 00:34:03.463 "rw_ios_per_sec": 0, 00:34:03.463 
"rw_mbytes_per_sec": 0, 00:34:03.463 "r_mbytes_per_sec": 0, 00:34:03.463 "w_mbytes_per_sec": 0 00:34:03.463 }, 00:34:03.463 "claimed": false, 00:34:03.463 "zoned": false, 00:34:03.463 "supported_io_types": { 00:34:03.463 "read": true, 00:34:03.463 "write": true, 00:34:03.463 "unmap": false, 00:34:03.463 "flush": false, 00:34:03.463 "reset": true, 00:34:03.463 "nvme_admin": false, 00:34:03.463 "nvme_io": false, 00:34:03.463 "nvme_io_md": false, 00:34:03.463 "write_zeroes": true, 00:34:03.463 "zcopy": false, 00:34:03.463 "get_zone_info": false, 00:34:03.463 "zone_management": false, 00:34:03.463 "zone_append": false, 00:34:03.463 "compare": false, 00:34:03.463 "compare_and_write": false, 00:34:03.463 "abort": false, 00:34:03.463 "seek_hole": false, 00:34:03.463 "seek_data": false, 00:34:03.463 "copy": false, 00:34:03.463 "nvme_iov_md": false 00:34:03.463 }, 00:34:03.463 "driver_specific": { 00:34:03.463 "raid": { 00:34:03.463 "uuid": "02687919-a9f4-4356-95ce-fb54ef4960f0", 00:34:03.463 "strip_size_kb": 64, 00:34:03.463 "state": "online", 00:34:03.463 "raid_level": "raid5f", 00:34:03.463 "superblock": true, 00:34:03.463 "num_base_bdevs": 3, 00:34:03.463 "num_base_bdevs_discovered": 3, 00:34:03.463 "num_base_bdevs_operational": 3, 00:34:03.463 "base_bdevs_list": [ 00:34:03.463 { 00:34:03.463 "name": "BaseBdev1", 00:34:03.463 "uuid": "788426cf-dc3d-400b-81de-75961fc5321f", 00:34:03.463 "is_configured": true, 00:34:03.463 "data_offset": 2048, 00:34:03.463 "data_size": 63488 00:34:03.463 }, 00:34:03.463 { 00:34:03.463 "name": "BaseBdev2", 00:34:03.463 "uuid": "c7ccdb7a-adee-40db-b9e2-f99a21014078", 00:34:03.463 "is_configured": true, 00:34:03.463 "data_offset": 2048, 00:34:03.463 "data_size": 63488 00:34:03.463 }, 00:34:03.463 { 00:34:03.463 "name": "BaseBdev3", 00:34:03.463 "uuid": "b75d2b54-09a5-43b8-b191-c9ba05112ee5", 00:34:03.463 "is_configured": true, 00:34:03.463 "data_offset": 2048, 00:34:03.463 "data_size": 63488 00:34:03.463 } 00:34:03.463 ] 00:34:03.463 } 
00:34:03.463 } 00:34:03.463 }' 00:34:03.463 18:32:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:34:03.463 18:32:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:34:03.463 BaseBdev2 00:34:03.463 BaseBdev3' 00:34:03.463 18:32:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:34:03.463 18:32:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:34:03.463 18:32:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:34:03.463 18:32:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:34:03.463 18:32:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:03.463 18:32:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:34:03.463 18:32:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:03.463 18:32:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:03.463 18:32:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:34:03.463 18:32:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:34:03.463 18:32:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:34:03.463 18:32:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:34:03.463 18:32:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | 
[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:34:03.463 18:32:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:03.463 18:32:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:03.723 18:32:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:03.723 18:32:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:34:03.723 18:32:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:34:03.723 18:32:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:34:03.723 18:32:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:34:03.723 18:32:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:34:03.723 18:32:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:03.723 18:32:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:03.723 18:32:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:03.723 18:32:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:34:03.723 18:32:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:34:03.723 18:32:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:34:03.723 18:32:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:03.723 18:32:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:03.723 [2024-12-06 
18:32:34.507917] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:34:03.723 18:32:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:03.723 18:32:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:34:03.723 18:32:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:34:03.723 18:32:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:34:03.723 18:32:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:34:03.723 18:32:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:34:03.723 18:32:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 2 00:34:03.723 18:32:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:34:03.723 18:32:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:34:03.723 18:32:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:34:03.723 18:32:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:34:03.723 18:32:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:34:03.723 18:32:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:34:03.723 18:32:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:34:03.723 18:32:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:34:03.723 18:32:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:34:03.723 18:32:34 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:03.723 18:32:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:34:03.723 18:32:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:03.723 18:32:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:03.723 18:32:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:03.723 18:32:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:34:03.723 "name": "Existed_Raid", 00:34:03.723 "uuid": "02687919-a9f4-4356-95ce-fb54ef4960f0", 00:34:03.723 "strip_size_kb": 64, 00:34:03.723 "state": "online", 00:34:03.723 "raid_level": "raid5f", 00:34:03.723 "superblock": true, 00:34:03.723 "num_base_bdevs": 3, 00:34:03.723 "num_base_bdevs_discovered": 2, 00:34:03.723 "num_base_bdevs_operational": 2, 00:34:03.723 "base_bdevs_list": [ 00:34:03.723 { 00:34:03.723 "name": null, 00:34:03.723 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:03.723 "is_configured": false, 00:34:03.723 "data_offset": 0, 00:34:03.723 "data_size": 63488 00:34:03.723 }, 00:34:03.723 { 00:34:03.723 "name": "BaseBdev2", 00:34:03.724 "uuid": "c7ccdb7a-adee-40db-b9e2-f99a21014078", 00:34:03.724 "is_configured": true, 00:34:03.724 "data_offset": 2048, 00:34:03.724 "data_size": 63488 00:34:03.724 }, 00:34:03.724 { 00:34:03.724 "name": "BaseBdev3", 00:34:03.724 "uuid": "b75d2b54-09a5-43b8-b191-c9ba05112ee5", 00:34:03.724 "is_configured": true, 00:34:03.724 "data_offset": 2048, 00:34:03.724 "data_size": 63488 00:34:03.724 } 00:34:03.724 ] 00:34:03.724 }' 00:34:03.724 18:32:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:34:03.724 18:32:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:34:04.292 18:32:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:34:04.292 18:32:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:34:04.292 18:32:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:34:04.292 18:32:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:04.292 18:32:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:04.292 18:32:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:04.292 18:32:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:04.292 18:32:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:34:04.292 18:32:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:34:04.293 18:32:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:34:04.293 18:32:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:04.293 18:32:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:04.293 [2024-12-06 18:32:35.070752] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:34:04.293 [2024-12-06 18:32:35.071089] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:34:04.293 [2024-12-06 18:32:35.171521] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:34:04.293 18:32:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:04.293 18:32:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:34:04.293 18:32:35 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:34:04.293 18:32:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:04.293 18:32:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:04.293 18:32:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:04.293 18:32:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:34:04.293 18:32:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:04.293 18:32:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:34:04.293 18:32:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:34:04.293 18:32:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:34:04.293 18:32:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:04.293 18:32:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:04.293 [2024-12-06 18:32:35.227469] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:34:04.293 [2024-12-06 18:32:35.227522] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:34:04.552 18:32:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:04.552 18:32:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:34:04.552 18:32:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:34:04.552 18:32:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:34:04.552 
18:32:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:04.552 18:32:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:04.552 18:32:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:04.552 18:32:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:04.552 18:32:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:34:04.552 18:32:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:34:04.552 18:32:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:34:04.552 18:32:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:34:04.552 18:32:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:34:04.552 18:32:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:34:04.552 18:32:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:04.552 18:32:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:04.552 BaseBdev2 00:34:04.552 18:32:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:04.552 18:32:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:34:04.552 18:32:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:34:04.552 18:32:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:34:04.552 18:32:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:34:04.552 18:32:35 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:34:04.552 18:32:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:34:04.552 18:32:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:34:04.552 18:32:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:04.552 18:32:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:04.552 18:32:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:04.552 18:32:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:34:04.552 18:32:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:04.552 18:32:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:04.552 [ 00:34:04.552 { 00:34:04.552 "name": "BaseBdev2", 00:34:04.552 "aliases": [ 00:34:04.552 "a6219485-f3a1-4a72-97a7-71747663d7b7" 00:34:04.552 ], 00:34:04.552 "product_name": "Malloc disk", 00:34:04.552 "block_size": 512, 00:34:04.552 "num_blocks": 65536, 00:34:04.552 "uuid": "a6219485-f3a1-4a72-97a7-71747663d7b7", 00:34:04.552 "assigned_rate_limits": { 00:34:04.552 "rw_ios_per_sec": 0, 00:34:04.552 "rw_mbytes_per_sec": 0, 00:34:04.552 "r_mbytes_per_sec": 0, 00:34:04.552 "w_mbytes_per_sec": 0 00:34:04.552 }, 00:34:04.552 "claimed": false, 00:34:04.552 "zoned": false, 00:34:04.552 "supported_io_types": { 00:34:04.552 "read": true, 00:34:04.552 "write": true, 00:34:04.552 "unmap": true, 00:34:04.552 "flush": true, 00:34:04.552 "reset": true, 00:34:04.552 "nvme_admin": false, 00:34:04.552 "nvme_io": false, 00:34:04.552 "nvme_io_md": false, 00:34:04.552 "write_zeroes": true, 00:34:04.552 "zcopy": true, 00:34:04.552 "get_zone_info": false, 
00:34:04.552 "zone_management": false, 00:34:04.552 "zone_append": false, 00:34:04.552 "compare": false, 00:34:04.552 "compare_and_write": false, 00:34:04.552 "abort": true, 00:34:04.552 "seek_hole": false, 00:34:04.552 "seek_data": false, 00:34:04.552 "copy": true, 00:34:04.552 "nvme_iov_md": false 00:34:04.552 }, 00:34:04.552 "memory_domains": [ 00:34:04.552 { 00:34:04.552 "dma_device_id": "system", 00:34:04.552 "dma_device_type": 1 00:34:04.552 }, 00:34:04.552 { 00:34:04.552 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:34:04.552 "dma_device_type": 2 00:34:04.552 } 00:34:04.552 ], 00:34:04.552 "driver_specific": {} 00:34:04.552 } 00:34:04.552 ] 00:34:04.552 18:32:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:04.552 18:32:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:34:04.552 18:32:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:34:04.552 18:32:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:34:04.552 18:32:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:34:04.552 18:32:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:04.552 18:32:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:04.812 BaseBdev3 00:34:04.812 18:32:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:04.812 18:32:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:34:04.812 18:32:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:34:04.812 18:32:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:34:04.812 18:32:35 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:34:04.812 18:32:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:34:04.812 18:32:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:34:04.812 18:32:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:34:04.812 18:32:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:04.812 18:32:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:04.812 18:32:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:04.812 18:32:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:34:04.812 18:32:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:04.812 18:32:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:04.812 [ 00:34:04.812 { 00:34:04.812 "name": "BaseBdev3", 00:34:04.812 "aliases": [ 00:34:04.812 "264a1269-298f-49bd-a65b-3b82a84db23f" 00:34:04.812 ], 00:34:04.812 "product_name": "Malloc disk", 00:34:04.812 "block_size": 512, 00:34:04.812 "num_blocks": 65536, 00:34:04.812 "uuid": "264a1269-298f-49bd-a65b-3b82a84db23f", 00:34:04.812 "assigned_rate_limits": { 00:34:04.812 "rw_ios_per_sec": 0, 00:34:04.812 "rw_mbytes_per_sec": 0, 00:34:04.812 "r_mbytes_per_sec": 0, 00:34:04.812 "w_mbytes_per_sec": 0 00:34:04.812 }, 00:34:04.812 "claimed": false, 00:34:04.812 "zoned": false, 00:34:04.812 "supported_io_types": { 00:34:04.812 "read": true, 00:34:04.812 "write": true, 00:34:04.812 "unmap": true, 00:34:04.812 "flush": true, 00:34:04.812 "reset": true, 00:34:04.812 "nvme_admin": false, 00:34:04.812 "nvme_io": false, 00:34:04.812 "nvme_io_md": 
false, 00:34:04.812 "write_zeroes": true, 00:34:04.812 "zcopy": true, 00:34:04.812 "get_zone_info": false, 00:34:04.812 "zone_management": false, 00:34:04.812 "zone_append": false, 00:34:04.812 "compare": false, 00:34:04.812 "compare_and_write": false, 00:34:04.812 "abort": true, 00:34:04.812 "seek_hole": false, 00:34:04.812 "seek_data": false, 00:34:04.812 "copy": true, 00:34:04.812 "nvme_iov_md": false 00:34:04.812 }, 00:34:04.812 "memory_domains": [ 00:34:04.812 { 00:34:04.812 "dma_device_id": "system", 00:34:04.812 "dma_device_type": 1 00:34:04.812 }, 00:34:04.812 { 00:34:04.812 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:34:04.812 "dma_device_type": 2 00:34:04.812 } 00:34:04.812 ], 00:34:04.812 "driver_specific": {} 00:34:04.812 } 00:34:04.812 ] 00:34:04.812 18:32:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:04.812 18:32:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:34:04.812 18:32:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:34:04.812 18:32:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:34:04.812 18:32:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:34:04.813 18:32:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:04.813 18:32:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:04.813 [2024-12-06 18:32:35.576095] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:34:04.813 [2024-12-06 18:32:35.576301] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:34:04.813 [2024-12-06 18:32:35.576460] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is 
claimed 00:34:04.813 [2024-12-06 18:32:35.578835] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:34:04.813 18:32:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:04.813 18:32:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:34:04.813 18:32:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:34:04.813 18:32:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:34:04.813 18:32:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:34:04.813 18:32:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:34:04.813 18:32:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:34:04.813 18:32:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:34:04.813 18:32:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:34:04.813 18:32:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:34:04.813 18:32:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:34:04.813 18:32:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:04.813 18:32:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:04.813 18:32:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:34:04.813 18:32:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:04.813 18:32:35 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:04.813 18:32:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:34:04.813 "name": "Existed_Raid", 00:34:04.813 "uuid": "9dd7d515-5ece-440a-affd-e1d1d8ee08ed", 00:34:04.813 "strip_size_kb": 64, 00:34:04.813 "state": "configuring", 00:34:04.813 "raid_level": "raid5f", 00:34:04.813 "superblock": true, 00:34:04.813 "num_base_bdevs": 3, 00:34:04.813 "num_base_bdevs_discovered": 2, 00:34:04.813 "num_base_bdevs_operational": 3, 00:34:04.813 "base_bdevs_list": [ 00:34:04.813 { 00:34:04.813 "name": "BaseBdev1", 00:34:04.813 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:04.813 "is_configured": false, 00:34:04.813 "data_offset": 0, 00:34:04.813 "data_size": 0 00:34:04.813 }, 00:34:04.813 { 00:34:04.813 "name": "BaseBdev2", 00:34:04.813 "uuid": "a6219485-f3a1-4a72-97a7-71747663d7b7", 00:34:04.813 "is_configured": true, 00:34:04.813 "data_offset": 2048, 00:34:04.813 "data_size": 63488 00:34:04.813 }, 00:34:04.813 { 00:34:04.813 "name": "BaseBdev3", 00:34:04.813 "uuid": "264a1269-298f-49bd-a65b-3b82a84db23f", 00:34:04.813 "is_configured": true, 00:34:04.813 "data_offset": 2048, 00:34:04.813 "data_size": 63488 00:34:04.813 } 00:34:04.813 ] 00:34:04.813 }' 00:34:04.813 18:32:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:34:04.813 18:32:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:05.072 18:32:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:34:05.072 18:32:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:05.072 18:32:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:05.072 [2024-12-06 18:32:35.979493] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:34:05.072 
18:32:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:05.072 18:32:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:34:05.072 18:32:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:34:05.072 18:32:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:34:05.072 18:32:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:34:05.072 18:32:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:34:05.072 18:32:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:34:05.072 18:32:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:34:05.072 18:32:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:34:05.072 18:32:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:34:05.072 18:32:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:34:05.072 18:32:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:34:05.072 18:32:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:05.072 18:32:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:05.072 18:32:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:05.072 18:32:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:05.330 18:32:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:34:05.330 "name": "Existed_Raid", 00:34:05.330 "uuid": "9dd7d515-5ece-440a-affd-e1d1d8ee08ed", 00:34:05.330 "strip_size_kb": 64, 00:34:05.330 "state": "configuring", 00:34:05.330 "raid_level": "raid5f", 00:34:05.330 "superblock": true, 00:34:05.330 "num_base_bdevs": 3, 00:34:05.330 "num_base_bdevs_discovered": 1, 00:34:05.330 "num_base_bdevs_operational": 3, 00:34:05.330 "base_bdevs_list": [ 00:34:05.330 { 00:34:05.330 "name": "BaseBdev1", 00:34:05.330 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:05.331 "is_configured": false, 00:34:05.331 "data_offset": 0, 00:34:05.331 "data_size": 0 00:34:05.331 }, 00:34:05.331 { 00:34:05.331 "name": null, 00:34:05.331 "uuid": "a6219485-f3a1-4a72-97a7-71747663d7b7", 00:34:05.331 "is_configured": false, 00:34:05.331 "data_offset": 0, 00:34:05.331 "data_size": 63488 00:34:05.331 }, 00:34:05.331 { 00:34:05.331 "name": "BaseBdev3", 00:34:05.331 "uuid": "264a1269-298f-49bd-a65b-3b82a84db23f", 00:34:05.331 "is_configured": true, 00:34:05.331 "data_offset": 2048, 00:34:05.331 "data_size": 63488 00:34:05.331 } 00:34:05.331 ] 00:34:05.331 }' 00:34:05.331 18:32:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:34:05.331 18:32:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:05.590 18:32:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:05.590 18:32:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:05.590 18:32:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:05.590 18:32:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:34:05.590 18:32:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:05.590 18:32:36 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:34:05.590 18:32:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:34:05.590 18:32:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:05.590 18:32:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:05.590 [2024-12-06 18:32:36.481459] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:34:05.590 BaseBdev1 00:34:05.590 18:32:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:05.590 18:32:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:34:05.590 18:32:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:34:05.590 18:32:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:34:05.590 18:32:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:34:05.590 18:32:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:34:05.590 18:32:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:34:05.590 18:32:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:34:05.590 18:32:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:05.590 18:32:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:05.590 18:32:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:05.590 18:32:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:34:05.590 
18:32:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:05.590 18:32:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:05.590 [ 00:34:05.590 { 00:34:05.590 "name": "BaseBdev1", 00:34:05.591 "aliases": [ 00:34:05.591 "95f18ee2-bf67-4dbd-9c69-25607b2d2faa" 00:34:05.591 ], 00:34:05.591 "product_name": "Malloc disk", 00:34:05.591 "block_size": 512, 00:34:05.591 "num_blocks": 65536, 00:34:05.591 "uuid": "95f18ee2-bf67-4dbd-9c69-25607b2d2faa", 00:34:05.591 "assigned_rate_limits": { 00:34:05.591 "rw_ios_per_sec": 0, 00:34:05.591 "rw_mbytes_per_sec": 0, 00:34:05.591 "r_mbytes_per_sec": 0, 00:34:05.591 "w_mbytes_per_sec": 0 00:34:05.591 }, 00:34:05.591 "claimed": true, 00:34:05.591 "claim_type": "exclusive_write", 00:34:05.591 "zoned": false, 00:34:05.591 "supported_io_types": { 00:34:05.591 "read": true, 00:34:05.591 "write": true, 00:34:05.591 "unmap": true, 00:34:05.591 "flush": true, 00:34:05.591 "reset": true, 00:34:05.591 "nvme_admin": false, 00:34:05.591 "nvme_io": false, 00:34:05.591 "nvme_io_md": false, 00:34:05.591 "write_zeroes": true, 00:34:05.591 "zcopy": true, 00:34:05.591 "get_zone_info": false, 00:34:05.591 "zone_management": false, 00:34:05.591 "zone_append": false, 00:34:05.591 "compare": false, 00:34:05.591 "compare_and_write": false, 00:34:05.591 "abort": true, 00:34:05.591 "seek_hole": false, 00:34:05.591 "seek_data": false, 00:34:05.591 "copy": true, 00:34:05.591 "nvme_iov_md": false 00:34:05.591 }, 00:34:05.591 "memory_domains": [ 00:34:05.591 { 00:34:05.591 "dma_device_id": "system", 00:34:05.591 "dma_device_type": 1 00:34:05.591 }, 00:34:05.591 { 00:34:05.591 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:34:05.591 "dma_device_type": 2 00:34:05.591 } 00:34:05.591 ], 00:34:05.591 "driver_specific": {} 00:34:05.591 } 00:34:05.591 ] 00:34:05.591 18:32:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:05.591 
18:32:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:34:05.591 18:32:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:34:05.591 18:32:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:34:05.591 18:32:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:34:05.591 18:32:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:34:05.591 18:32:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:34:05.591 18:32:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:34:05.591 18:32:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:34:05.591 18:32:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:34:05.591 18:32:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:34:05.591 18:32:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:34:05.591 18:32:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:05.591 18:32:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:34:05.591 18:32:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:05.591 18:32:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:05.850 18:32:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:05.850 18:32:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:34:05.850 "name": "Existed_Raid", 00:34:05.850 "uuid": "9dd7d515-5ece-440a-affd-e1d1d8ee08ed", 00:34:05.850 "strip_size_kb": 64, 00:34:05.850 "state": "configuring", 00:34:05.850 "raid_level": "raid5f", 00:34:05.850 "superblock": true, 00:34:05.850 "num_base_bdevs": 3, 00:34:05.850 "num_base_bdevs_discovered": 2, 00:34:05.850 "num_base_bdevs_operational": 3, 00:34:05.850 "base_bdevs_list": [ 00:34:05.850 { 00:34:05.850 "name": "BaseBdev1", 00:34:05.850 "uuid": "95f18ee2-bf67-4dbd-9c69-25607b2d2faa", 00:34:05.850 "is_configured": true, 00:34:05.850 "data_offset": 2048, 00:34:05.850 "data_size": 63488 00:34:05.850 }, 00:34:05.850 { 00:34:05.850 "name": null, 00:34:05.850 "uuid": "a6219485-f3a1-4a72-97a7-71747663d7b7", 00:34:05.850 "is_configured": false, 00:34:05.850 "data_offset": 0, 00:34:05.850 "data_size": 63488 00:34:05.850 }, 00:34:05.850 { 00:34:05.850 "name": "BaseBdev3", 00:34:05.850 "uuid": "264a1269-298f-49bd-a65b-3b82a84db23f", 00:34:05.850 "is_configured": true, 00:34:05.850 "data_offset": 2048, 00:34:05.850 "data_size": 63488 00:34:05.850 } 00:34:05.850 ] 00:34:05.850 }' 00:34:05.850 18:32:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:34:05.850 18:32:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:06.108 18:32:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:34:06.108 18:32:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:06.108 18:32:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:06.108 18:32:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:06.108 18:32:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:06.108 18:32:36 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:34:06.108 18:32:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:34:06.108 18:32:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:06.108 18:32:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:06.108 [2024-12-06 18:32:36.976794] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:34:06.109 18:32:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:06.109 18:32:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:34:06.109 18:32:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:34:06.109 18:32:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:34:06.109 18:32:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:34:06.109 18:32:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:34:06.109 18:32:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:34:06.109 18:32:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:34:06.109 18:32:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:34:06.109 18:32:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:34:06.109 18:32:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:34:06.109 18:32:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:06.109 18:32:36 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:06.109 18:32:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:06.109 18:32:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:34:06.109 18:32:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:06.109 18:32:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:34:06.109 "name": "Existed_Raid", 00:34:06.109 "uuid": "9dd7d515-5ece-440a-affd-e1d1d8ee08ed", 00:34:06.109 "strip_size_kb": 64, 00:34:06.109 "state": "configuring", 00:34:06.109 "raid_level": "raid5f", 00:34:06.109 "superblock": true, 00:34:06.109 "num_base_bdevs": 3, 00:34:06.109 "num_base_bdevs_discovered": 1, 00:34:06.109 "num_base_bdevs_operational": 3, 00:34:06.109 "base_bdevs_list": [ 00:34:06.109 { 00:34:06.109 "name": "BaseBdev1", 00:34:06.109 "uuid": "95f18ee2-bf67-4dbd-9c69-25607b2d2faa", 00:34:06.109 "is_configured": true, 00:34:06.109 "data_offset": 2048, 00:34:06.109 "data_size": 63488 00:34:06.109 }, 00:34:06.109 { 00:34:06.109 "name": null, 00:34:06.109 "uuid": "a6219485-f3a1-4a72-97a7-71747663d7b7", 00:34:06.109 "is_configured": false, 00:34:06.109 "data_offset": 0, 00:34:06.109 "data_size": 63488 00:34:06.109 }, 00:34:06.109 { 00:34:06.109 "name": null, 00:34:06.109 "uuid": "264a1269-298f-49bd-a65b-3b82a84db23f", 00:34:06.109 "is_configured": false, 00:34:06.109 "data_offset": 0, 00:34:06.109 "data_size": 63488 00:34:06.109 } 00:34:06.109 ] 00:34:06.109 }' 00:34:06.109 18:32:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:34:06.109 18:32:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:06.676 18:32:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq 
'.[0].base_bdevs_list[2].is_configured' 00:34:06.676 18:32:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:06.676 18:32:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:06.676 18:32:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:06.676 18:32:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:06.676 18:32:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:34:06.676 18:32:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:34:06.676 18:32:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:06.676 18:32:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:06.676 [2024-12-06 18:32:37.436196] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:34:06.676 18:32:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:06.676 18:32:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:34:06.676 18:32:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:34:06.676 18:32:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:34:06.676 18:32:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:34:06.676 18:32:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:34:06.676 18:32:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:34:06.676 
18:32:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:34:06.676 18:32:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:34:06.676 18:32:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:34:06.676 18:32:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:34:06.676 18:32:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:06.676 18:32:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:34:06.676 18:32:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:06.676 18:32:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:06.676 18:32:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:06.676 18:32:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:34:06.676 "name": "Existed_Raid", 00:34:06.676 "uuid": "9dd7d515-5ece-440a-affd-e1d1d8ee08ed", 00:34:06.676 "strip_size_kb": 64, 00:34:06.676 "state": "configuring", 00:34:06.676 "raid_level": "raid5f", 00:34:06.676 "superblock": true, 00:34:06.676 "num_base_bdevs": 3, 00:34:06.676 "num_base_bdevs_discovered": 2, 00:34:06.676 "num_base_bdevs_operational": 3, 00:34:06.676 "base_bdevs_list": [ 00:34:06.676 { 00:34:06.676 "name": "BaseBdev1", 00:34:06.676 "uuid": "95f18ee2-bf67-4dbd-9c69-25607b2d2faa", 00:34:06.676 "is_configured": true, 00:34:06.676 "data_offset": 2048, 00:34:06.676 "data_size": 63488 00:34:06.676 }, 00:34:06.676 { 00:34:06.676 "name": null, 00:34:06.676 "uuid": "a6219485-f3a1-4a72-97a7-71747663d7b7", 00:34:06.676 "is_configured": false, 00:34:06.676 "data_offset": 0, 00:34:06.676 "data_size": 63488 00:34:06.676 }, 
00:34:06.676 { 00:34:06.676 "name": "BaseBdev3", 00:34:06.676 "uuid": "264a1269-298f-49bd-a65b-3b82a84db23f", 00:34:06.676 "is_configured": true, 00:34:06.676 "data_offset": 2048, 00:34:06.676 "data_size": 63488 00:34:06.676 } 00:34:06.676 ] 00:34:06.676 }' 00:34:06.676 18:32:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:34:06.676 18:32:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:06.934 18:32:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:06.934 18:32:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:34:06.934 18:32:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:06.934 18:32:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:06.934 18:32:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:06.934 18:32:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:34:06.934 18:32:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:34:06.935 18:32:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:06.935 18:32:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:06.935 [2024-12-06 18:32:37.879596] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:34:07.193 18:32:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:07.193 18:32:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:34:07.193 18:32:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=Existed_Raid 00:34:07.193 18:32:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:34:07.193 18:32:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:34:07.193 18:32:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:34:07.193 18:32:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:34:07.193 18:32:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:34:07.193 18:32:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:34:07.193 18:32:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:34:07.193 18:32:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:34:07.193 18:32:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:07.193 18:32:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:07.193 18:32:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:07.193 18:32:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:34:07.193 18:32:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:07.193 18:32:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:34:07.193 "name": "Existed_Raid", 00:34:07.193 "uuid": "9dd7d515-5ece-440a-affd-e1d1d8ee08ed", 00:34:07.193 "strip_size_kb": 64, 00:34:07.193 "state": "configuring", 00:34:07.193 "raid_level": "raid5f", 00:34:07.193 "superblock": true, 00:34:07.193 "num_base_bdevs": 3, 00:34:07.193 "num_base_bdevs_discovered": 1, 00:34:07.193 
"num_base_bdevs_operational": 3, 00:34:07.193 "base_bdevs_list": [ 00:34:07.193 { 00:34:07.193 "name": null, 00:34:07.193 "uuid": "95f18ee2-bf67-4dbd-9c69-25607b2d2faa", 00:34:07.193 "is_configured": false, 00:34:07.193 "data_offset": 0, 00:34:07.193 "data_size": 63488 00:34:07.193 }, 00:34:07.193 { 00:34:07.193 "name": null, 00:34:07.193 "uuid": "a6219485-f3a1-4a72-97a7-71747663d7b7", 00:34:07.193 "is_configured": false, 00:34:07.193 "data_offset": 0, 00:34:07.193 "data_size": 63488 00:34:07.193 }, 00:34:07.193 { 00:34:07.193 "name": "BaseBdev3", 00:34:07.193 "uuid": "264a1269-298f-49bd-a65b-3b82a84db23f", 00:34:07.193 "is_configured": true, 00:34:07.193 "data_offset": 2048, 00:34:07.193 "data_size": 63488 00:34:07.193 } 00:34:07.193 ] 00:34:07.193 }' 00:34:07.193 18:32:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:34:07.193 18:32:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:07.451 18:32:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:07.451 18:32:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:34:07.451 18:32:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:07.451 18:32:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:07.451 18:32:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:07.709 18:32:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:34:07.709 18:32:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:34:07.709 18:32:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:07.709 18:32:38 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:07.709 [2024-12-06 18:32:38.404329] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:34:07.709 18:32:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:07.709 18:32:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:34:07.709 18:32:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:34:07.709 18:32:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:34:07.709 18:32:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:34:07.709 18:32:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:34:07.709 18:32:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:34:07.709 18:32:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:34:07.709 18:32:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:34:07.709 18:32:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:34:07.709 18:32:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:34:07.709 18:32:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:07.709 18:32:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:07.710 18:32:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:07.710 18:32:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:34:07.710 18:32:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:07.710 18:32:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:34:07.710 "name": "Existed_Raid", 00:34:07.710 "uuid": "9dd7d515-5ece-440a-affd-e1d1d8ee08ed", 00:34:07.710 "strip_size_kb": 64, 00:34:07.710 "state": "configuring", 00:34:07.710 "raid_level": "raid5f", 00:34:07.710 "superblock": true, 00:34:07.710 "num_base_bdevs": 3, 00:34:07.710 "num_base_bdevs_discovered": 2, 00:34:07.710 "num_base_bdevs_operational": 3, 00:34:07.710 "base_bdevs_list": [ 00:34:07.710 { 00:34:07.710 "name": null, 00:34:07.710 "uuid": "95f18ee2-bf67-4dbd-9c69-25607b2d2faa", 00:34:07.710 "is_configured": false, 00:34:07.710 "data_offset": 0, 00:34:07.710 "data_size": 63488 00:34:07.710 }, 00:34:07.710 { 00:34:07.710 "name": "BaseBdev2", 00:34:07.710 "uuid": "a6219485-f3a1-4a72-97a7-71747663d7b7", 00:34:07.710 "is_configured": true, 00:34:07.710 "data_offset": 2048, 00:34:07.710 "data_size": 63488 00:34:07.710 }, 00:34:07.710 { 00:34:07.710 "name": "BaseBdev3", 00:34:07.710 "uuid": "264a1269-298f-49bd-a65b-3b82a84db23f", 00:34:07.710 "is_configured": true, 00:34:07.710 "data_offset": 2048, 00:34:07.710 "data_size": 63488 00:34:07.710 } 00:34:07.710 ] 00:34:07.710 }' 00:34:07.710 18:32:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:34:07.710 18:32:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:07.967 18:32:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:34:07.968 18:32:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:07.968 18:32:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:07.968 18:32:38 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:34:07.968 18:32:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:07.968 18:32:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:34:07.968 18:32:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:07.968 18:32:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:07.968 18:32:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:07.968 18:32:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:34:07.968 18:32:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:07.968 18:32:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 95f18ee2-bf67-4dbd-9c69-25607b2d2faa 00:34:07.968 18:32:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:07.968 18:32:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:07.968 [2024-12-06 18:32:38.903907] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:34:07.968 [2024-12-06 18:32:38.904195] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:34:07.968 [2024-12-06 18:32:38.904218] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:34:07.968 [2024-12-06 18:32:38.904502] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:34:07.968 NewBaseBdev 00:34:07.968 18:32:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:07.968 18:32:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # 
waitforbdev NewBaseBdev 00:34:07.968 18:32:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:34:07.968 18:32:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:34:07.968 18:32:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:34:07.968 18:32:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:34:07.968 18:32:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:34:07.968 18:32:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:34:07.968 18:32:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:07.968 18:32:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:07.968 [2024-12-06 18:32:38.910215] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:34:07.968 [2024-12-06 18:32:38.910366] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:34:07.968 [2024-12-06 18:32:38.910695] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:34:08.226 18:32:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:08.226 18:32:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:34:08.226 18:32:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:08.226 18:32:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:08.226 [ 00:34:08.226 { 00:34:08.226 "name": "NewBaseBdev", 00:34:08.226 "aliases": [ 00:34:08.226 "95f18ee2-bf67-4dbd-9c69-25607b2d2faa" 00:34:08.226 
], 00:34:08.226 "product_name": "Malloc disk", 00:34:08.226 "block_size": 512, 00:34:08.226 "num_blocks": 65536, 00:34:08.226 "uuid": "95f18ee2-bf67-4dbd-9c69-25607b2d2faa", 00:34:08.226 "assigned_rate_limits": { 00:34:08.226 "rw_ios_per_sec": 0, 00:34:08.226 "rw_mbytes_per_sec": 0, 00:34:08.226 "r_mbytes_per_sec": 0, 00:34:08.226 "w_mbytes_per_sec": 0 00:34:08.226 }, 00:34:08.226 "claimed": true, 00:34:08.226 "claim_type": "exclusive_write", 00:34:08.226 "zoned": false, 00:34:08.226 "supported_io_types": { 00:34:08.226 "read": true, 00:34:08.226 "write": true, 00:34:08.226 "unmap": true, 00:34:08.226 "flush": true, 00:34:08.226 "reset": true, 00:34:08.226 "nvme_admin": false, 00:34:08.226 "nvme_io": false, 00:34:08.226 "nvme_io_md": false, 00:34:08.226 "write_zeroes": true, 00:34:08.226 "zcopy": true, 00:34:08.226 "get_zone_info": false, 00:34:08.226 "zone_management": false, 00:34:08.226 "zone_append": false, 00:34:08.226 "compare": false, 00:34:08.226 "compare_and_write": false, 00:34:08.226 "abort": true, 00:34:08.226 "seek_hole": false, 00:34:08.226 "seek_data": false, 00:34:08.226 "copy": true, 00:34:08.226 "nvme_iov_md": false 00:34:08.226 }, 00:34:08.226 "memory_domains": [ 00:34:08.226 { 00:34:08.226 "dma_device_id": "system", 00:34:08.226 "dma_device_type": 1 00:34:08.226 }, 00:34:08.226 { 00:34:08.226 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:34:08.226 "dma_device_type": 2 00:34:08.226 } 00:34:08.226 ], 00:34:08.226 "driver_specific": {} 00:34:08.226 } 00:34:08.226 ] 00:34:08.226 18:32:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:08.226 18:32:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:34:08.226 18:32:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:34:08.226 18:32:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=Existed_Raid 00:34:08.226 18:32:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:34:08.226 18:32:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:34:08.226 18:32:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:34:08.226 18:32:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:34:08.226 18:32:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:34:08.226 18:32:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:34:08.226 18:32:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:34:08.226 18:32:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:34:08.226 18:32:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:08.226 18:32:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:08.226 18:32:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:34:08.226 18:32:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:08.226 18:32:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:08.226 18:32:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:34:08.226 "name": "Existed_Raid", 00:34:08.226 "uuid": "9dd7d515-5ece-440a-affd-e1d1d8ee08ed", 00:34:08.226 "strip_size_kb": 64, 00:34:08.226 "state": "online", 00:34:08.226 "raid_level": "raid5f", 00:34:08.226 "superblock": true, 00:34:08.226 "num_base_bdevs": 3, 00:34:08.226 "num_base_bdevs_discovered": 3, 00:34:08.226 
"num_base_bdevs_operational": 3, 00:34:08.226 "base_bdevs_list": [ 00:34:08.226 { 00:34:08.226 "name": "NewBaseBdev", 00:34:08.226 "uuid": "95f18ee2-bf67-4dbd-9c69-25607b2d2faa", 00:34:08.226 "is_configured": true, 00:34:08.226 "data_offset": 2048, 00:34:08.226 "data_size": 63488 00:34:08.226 }, 00:34:08.226 { 00:34:08.226 "name": "BaseBdev2", 00:34:08.226 "uuid": "a6219485-f3a1-4a72-97a7-71747663d7b7", 00:34:08.226 "is_configured": true, 00:34:08.226 "data_offset": 2048, 00:34:08.226 "data_size": 63488 00:34:08.226 }, 00:34:08.226 { 00:34:08.226 "name": "BaseBdev3", 00:34:08.226 "uuid": "264a1269-298f-49bd-a65b-3b82a84db23f", 00:34:08.226 "is_configured": true, 00:34:08.226 "data_offset": 2048, 00:34:08.226 "data_size": 63488 00:34:08.226 } 00:34:08.226 ] 00:34:08.226 }' 00:34:08.226 18:32:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:34:08.226 18:32:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:08.484 18:32:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:34:08.484 18:32:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:34:08.484 18:32:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:34:08.484 18:32:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:34:08.484 18:32:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:34:08.484 18:32:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:34:08.484 18:32:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:34:08.484 18:32:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:34:08.484 18:32:39 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:08.484 18:32:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:08.484 [2024-12-06 18:32:39.373162] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:34:08.484 18:32:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:08.484 18:32:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:34:08.484 "name": "Existed_Raid", 00:34:08.484 "aliases": [ 00:34:08.484 "9dd7d515-5ece-440a-affd-e1d1d8ee08ed" 00:34:08.484 ], 00:34:08.484 "product_name": "Raid Volume", 00:34:08.484 "block_size": 512, 00:34:08.484 "num_blocks": 126976, 00:34:08.484 "uuid": "9dd7d515-5ece-440a-affd-e1d1d8ee08ed", 00:34:08.484 "assigned_rate_limits": { 00:34:08.484 "rw_ios_per_sec": 0, 00:34:08.484 "rw_mbytes_per_sec": 0, 00:34:08.484 "r_mbytes_per_sec": 0, 00:34:08.484 "w_mbytes_per_sec": 0 00:34:08.484 }, 00:34:08.484 "claimed": false, 00:34:08.484 "zoned": false, 00:34:08.484 "supported_io_types": { 00:34:08.484 "read": true, 00:34:08.484 "write": true, 00:34:08.484 "unmap": false, 00:34:08.484 "flush": false, 00:34:08.484 "reset": true, 00:34:08.484 "nvme_admin": false, 00:34:08.484 "nvme_io": false, 00:34:08.484 "nvme_io_md": false, 00:34:08.484 "write_zeroes": true, 00:34:08.484 "zcopy": false, 00:34:08.484 "get_zone_info": false, 00:34:08.484 "zone_management": false, 00:34:08.484 "zone_append": false, 00:34:08.484 "compare": false, 00:34:08.484 "compare_and_write": false, 00:34:08.484 "abort": false, 00:34:08.484 "seek_hole": false, 00:34:08.484 "seek_data": false, 00:34:08.484 "copy": false, 00:34:08.484 "nvme_iov_md": false 00:34:08.484 }, 00:34:08.484 "driver_specific": { 00:34:08.484 "raid": { 00:34:08.484 "uuid": "9dd7d515-5ece-440a-affd-e1d1d8ee08ed", 00:34:08.484 "strip_size_kb": 64, 00:34:08.484 "state": "online", 00:34:08.484 
"raid_level": "raid5f", 00:34:08.484 "superblock": true, 00:34:08.484 "num_base_bdevs": 3, 00:34:08.484 "num_base_bdevs_discovered": 3, 00:34:08.484 "num_base_bdevs_operational": 3, 00:34:08.484 "base_bdevs_list": [ 00:34:08.484 { 00:34:08.484 "name": "NewBaseBdev", 00:34:08.484 "uuid": "95f18ee2-bf67-4dbd-9c69-25607b2d2faa", 00:34:08.484 "is_configured": true, 00:34:08.484 "data_offset": 2048, 00:34:08.484 "data_size": 63488 00:34:08.484 }, 00:34:08.484 { 00:34:08.484 "name": "BaseBdev2", 00:34:08.484 "uuid": "a6219485-f3a1-4a72-97a7-71747663d7b7", 00:34:08.484 "is_configured": true, 00:34:08.484 "data_offset": 2048, 00:34:08.484 "data_size": 63488 00:34:08.484 }, 00:34:08.484 { 00:34:08.484 "name": "BaseBdev3", 00:34:08.484 "uuid": "264a1269-298f-49bd-a65b-3b82a84db23f", 00:34:08.484 "is_configured": true, 00:34:08.484 "data_offset": 2048, 00:34:08.484 "data_size": 63488 00:34:08.484 } 00:34:08.484 ] 00:34:08.484 } 00:34:08.484 } 00:34:08.484 }' 00:34:08.484 18:32:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:34:08.743 18:32:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:34:08.743 BaseBdev2 00:34:08.743 BaseBdev3' 00:34:08.743 18:32:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:34:08.743 18:32:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:34:08.743 18:32:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:34:08.743 18:32:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:34:08.743 18:32:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b 
NewBaseBdev 00:34:08.743 18:32:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:08.743 18:32:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:08.743 18:32:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:08.743 18:32:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:34:08.743 18:32:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:34:08.743 18:32:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:34:08.743 18:32:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:34:08.743 18:32:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:34:08.743 18:32:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:08.743 18:32:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:08.743 18:32:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:08.743 18:32:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:34:08.743 18:32:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:34:08.743 18:32:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:34:08.743 18:32:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:34:08.743 18:32:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:34:08.743 18:32:39 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:08.743 18:32:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:08.743 18:32:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:08.743 18:32:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:34:08.743 18:32:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:34:08.743 18:32:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:34:08.743 18:32:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:08.743 18:32:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:08.743 [2024-12-06 18:32:39.636585] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:34:08.743 [2024-12-06 18:32:39.636611] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:34:08.743 [2024-12-06 18:32:39.636685] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:34:08.743 [2024-12-06 18:32:39.636998] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:34:08.743 [2024-12-06 18:32:39.637015] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:34:08.743 18:32:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:08.743 18:32:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 80248 00:34:08.743 18:32:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 80248 ']' 00:34:08.743 18:32:39 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@958 -- # kill -0 80248 00:34:08.743 18:32:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:34:08.743 18:32:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:08.743 18:32:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80248 00:34:08.743 18:32:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:34:08.743 18:32:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:34:08.743 18:32:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80248' 00:34:08.743 killing process with pid 80248 00:34:08.743 18:32:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 80248 00:34:08.743 [2024-12-06 18:32:39.688960] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:34:08.743 18:32:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 80248 00:34:09.311 [2024-12-06 18:32:40.012642] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:34:10.690 18:32:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:34:10.690 ************************************ 00:34:10.690 END TEST raid5f_state_function_test_sb 00:34:10.690 ************************************ 00:34:10.690 00:34:10.690 real 0m10.322s 00:34:10.690 user 0m16.069s 00:34:10.690 sys 0m2.231s 00:34:10.690 18:32:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:10.690 18:32:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:10.690 18:32:41 bdev_raid -- bdev/bdev_raid.sh@988 -- # run_test raid5f_superblock_test raid_superblock_test raid5f 3 00:34:10.690 18:32:41 bdev_raid -- 
common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:34:10.690 18:32:41 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:10.690 18:32:41 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:34:10.690 ************************************ 00:34:10.690 START TEST raid5f_superblock_test 00:34:10.690 ************************************ 00:34:10.690 18:32:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid5f 3 00:34:10.690 18:32:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid5f 00:34:10.690 18:32:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:34:10.690 18:32:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:34:10.690 18:32:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:34:10.690 18:32:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:34:10.690 18:32:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:34:10.690 18:32:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:34:10.690 18:32:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:34:10.690 18:32:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:34:10.690 18:32:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:34:10.690 18:32:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:34:10.690 18:32:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:34:10.690 18:32:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:34:10.690 18:32:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid5f '!=' raid1 ']' 
00:34:10.690 18:32:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:34:10.690 18:32:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:34:10.690 18:32:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=80870 00:34:10.690 18:32:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:34:10.690 18:32:41 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 80870 00:34:10.690 18:32:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 80870 ']' 00:34:10.690 18:32:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:10.690 18:32:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:10.690 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:10.690 18:32:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:10.690 18:32:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:10.690 18:32:41 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:34:10.690 [2024-12-06 18:32:41.394704] Starting SPDK v25.01-pre git sha1 b6a18b192 / DPDK 24.03.0 initialization... 
00:34:10.690 [2024-12-06 18:32:41.394833] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80870 ] 00:34:10.690 [2024-12-06 18:32:41.577644] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:10.950 [2024-12-06 18:32:41.706787] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:11.211 [2024-12-06 18:32:41.945826] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:34:11.211 [2024-12-06 18:32:41.945878] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:34:11.471 18:32:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:11.471 18:32:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:34:11.471 18:32:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:34:11.471 18:32:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:34:11.471 18:32:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:34:11.471 18:32:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:34:11.471 18:32:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:34:11.471 18:32:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:34:11.471 18:32:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:34:11.471 18:32:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:34:11.471 18:32:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b 
malloc1 00:34:11.471 18:32:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:11.471 18:32:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:34:11.471 malloc1 00:34:11.471 18:32:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:11.471 18:32:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:34:11.471 18:32:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:11.471 18:32:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:34:11.471 [2024-12-06 18:32:42.280569] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:34:11.471 [2024-12-06 18:32:42.280639] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:34:11.471 [2024-12-06 18:32:42.280669] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:34:11.471 [2024-12-06 18:32:42.280681] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:34:11.471 [2024-12-06 18:32:42.283429] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:34:11.471 [2024-12-06 18:32:42.283471] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:34:11.471 pt1 00:34:11.471 18:32:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:11.471 18:32:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:34:11.471 18:32:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:34:11.471 18:32:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:34:11.471 18:32:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 
00:34:11.471 18:32:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:34:11.471 18:32:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:34:11.471 18:32:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:34:11.471 18:32:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:34:11.471 18:32:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:34:11.471 18:32:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:11.471 18:32:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:34:11.471 malloc2 00:34:11.471 18:32:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:11.471 18:32:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:34:11.471 18:32:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:11.471 18:32:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:34:11.471 [2024-12-06 18:32:42.343247] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:34:11.471 [2024-12-06 18:32:42.343305] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:34:11.471 [2024-12-06 18:32:42.343339] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:34:11.471 [2024-12-06 18:32:42.343352] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:34:11.471 [2024-12-06 18:32:42.345968] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:34:11.471 [2024-12-06 18:32:42.346006] vbdev_passthru.c: 
711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:34:11.471 pt2 00:34:11.471 18:32:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:11.471 18:32:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:34:11.472 18:32:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:34:11.472 18:32:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:34:11.472 18:32:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:34:11.472 18:32:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:34:11.472 18:32:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:34:11.472 18:32:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:34:11.472 18:32:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:34:11.472 18:32:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:34:11.472 18:32:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:11.472 18:32:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:34:11.472 malloc3 00:34:11.472 18:32:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:11.472 18:32:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:34:11.472 18:32:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:11.472 18:32:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:34:11.472 [2024-12-06 18:32:42.417166] 
vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:34:11.472 [2024-12-06 18:32:42.417232] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:34:11.472 [2024-12-06 18:32:42.417259] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:34:11.472 [2024-12-06 18:32:42.417271] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:34:11.731 [2024-12-06 18:32:42.419930] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:34:11.731 [2024-12-06 18:32:42.419971] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:34:11.731 pt3 00:34:11.731 18:32:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:11.731 18:32:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:34:11.731 18:32:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:34:11.731 18:32:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:34:11.731 18:32:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:11.731 18:32:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:34:11.731 [2024-12-06 18:32:42.429220] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:34:11.731 [2024-12-06 18:32:42.431563] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:34:11.731 [2024-12-06 18:32:42.431637] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:34:11.731 [2024-12-06 18:32:42.431833] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:34:11.731 [2024-12-06 18:32:42.431857] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 
00:34:11.731 [2024-12-06 18:32:42.432114] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:34:11.731 [2024-12-06 18:32:42.437872] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:34:11.731 [2024-12-06 18:32:42.437896] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:34:11.731 [2024-12-06 18:32:42.438107] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:34:11.731 18:32:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:11.731 18:32:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:34:11.731 18:32:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:34:11.731 18:32:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:34:11.731 18:32:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:34:11.731 18:32:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:34:11.731 18:32:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:34:11.731 18:32:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:34:11.731 18:32:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:34:11.731 18:32:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:34:11.731 18:32:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:34:11.732 18:32:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:11.732 18:32:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:11.732 
18:32:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:11.732 18:32:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:34:11.732 18:32:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:11.732 18:32:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:34:11.732 "name": "raid_bdev1", 00:34:11.732 "uuid": "5365294c-30fa-4295-b527-b0be22095e54", 00:34:11.732 "strip_size_kb": 64, 00:34:11.732 "state": "online", 00:34:11.732 "raid_level": "raid5f", 00:34:11.732 "superblock": true, 00:34:11.732 "num_base_bdevs": 3, 00:34:11.732 "num_base_bdevs_discovered": 3, 00:34:11.732 "num_base_bdevs_operational": 3, 00:34:11.732 "base_bdevs_list": [ 00:34:11.732 { 00:34:11.732 "name": "pt1", 00:34:11.732 "uuid": "00000000-0000-0000-0000-000000000001", 00:34:11.732 "is_configured": true, 00:34:11.732 "data_offset": 2048, 00:34:11.732 "data_size": 63488 00:34:11.732 }, 00:34:11.732 { 00:34:11.732 "name": "pt2", 00:34:11.732 "uuid": "00000000-0000-0000-0000-000000000002", 00:34:11.732 "is_configured": true, 00:34:11.732 "data_offset": 2048, 00:34:11.732 "data_size": 63488 00:34:11.732 }, 00:34:11.732 { 00:34:11.732 "name": "pt3", 00:34:11.732 "uuid": "00000000-0000-0000-0000-000000000003", 00:34:11.732 "is_configured": true, 00:34:11.732 "data_offset": 2048, 00:34:11.732 "data_size": 63488 00:34:11.732 } 00:34:11.732 ] 00:34:11.732 }' 00:34:11.732 18:32:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:34:11.732 18:32:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:34:11.991 18:32:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:34:11.991 18:32:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:34:11.991 18:32:42 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:34:11.991 18:32:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:34:11.991 18:32:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:34:11.991 18:32:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:34:11.991 18:32:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:34:11.991 18:32:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:34:11.991 18:32:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:11.991 18:32:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:34:11.991 [2024-12-06 18:32:42.896885] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:34:11.991 18:32:42 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:11.991 18:32:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:34:11.991 "name": "raid_bdev1", 00:34:11.991 "aliases": [ 00:34:11.991 "5365294c-30fa-4295-b527-b0be22095e54" 00:34:11.991 ], 00:34:11.991 "product_name": "Raid Volume", 00:34:11.991 "block_size": 512, 00:34:11.991 "num_blocks": 126976, 00:34:11.991 "uuid": "5365294c-30fa-4295-b527-b0be22095e54", 00:34:11.991 "assigned_rate_limits": { 00:34:11.991 "rw_ios_per_sec": 0, 00:34:11.991 "rw_mbytes_per_sec": 0, 00:34:11.991 "r_mbytes_per_sec": 0, 00:34:11.991 "w_mbytes_per_sec": 0 00:34:11.991 }, 00:34:11.991 "claimed": false, 00:34:11.991 "zoned": false, 00:34:11.991 "supported_io_types": { 00:34:11.991 "read": true, 00:34:11.991 "write": true, 00:34:11.991 "unmap": false, 00:34:11.991 "flush": false, 00:34:11.991 "reset": true, 00:34:11.991 "nvme_admin": false, 00:34:11.991 "nvme_io": false, 00:34:11.991 "nvme_io_md": false, 
00:34:11.991 "write_zeroes": true, 00:34:11.991 "zcopy": false, 00:34:11.991 "get_zone_info": false, 00:34:11.991 "zone_management": false, 00:34:11.991 "zone_append": false, 00:34:11.991 "compare": false, 00:34:11.991 "compare_and_write": false, 00:34:11.991 "abort": false, 00:34:11.991 "seek_hole": false, 00:34:11.991 "seek_data": false, 00:34:11.991 "copy": false, 00:34:11.991 "nvme_iov_md": false 00:34:11.991 }, 00:34:11.991 "driver_specific": { 00:34:11.991 "raid": { 00:34:11.991 "uuid": "5365294c-30fa-4295-b527-b0be22095e54", 00:34:11.991 "strip_size_kb": 64, 00:34:11.991 "state": "online", 00:34:11.991 "raid_level": "raid5f", 00:34:11.991 "superblock": true, 00:34:11.991 "num_base_bdevs": 3, 00:34:11.991 "num_base_bdevs_discovered": 3, 00:34:11.991 "num_base_bdevs_operational": 3, 00:34:11.991 "base_bdevs_list": [ 00:34:11.991 { 00:34:11.991 "name": "pt1", 00:34:11.991 "uuid": "00000000-0000-0000-0000-000000000001", 00:34:11.991 "is_configured": true, 00:34:11.991 "data_offset": 2048, 00:34:11.991 "data_size": 63488 00:34:11.991 }, 00:34:11.991 { 00:34:11.991 "name": "pt2", 00:34:11.991 "uuid": "00000000-0000-0000-0000-000000000002", 00:34:11.991 "is_configured": true, 00:34:11.991 "data_offset": 2048, 00:34:11.991 "data_size": 63488 00:34:11.991 }, 00:34:11.991 { 00:34:11.991 "name": "pt3", 00:34:11.991 "uuid": "00000000-0000-0000-0000-000000000003", 00:34:11.991 "is_configured": true, 00:34:11.991 "data_offset": 2048, 00:34:11.991 "data_size": 63488 00:34:11.991 } 00:34:11.991 ] 00:34:11.992 } 00:34:11.992 } 00:34:11.992 }' 00:34:11.992 18:32:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:34:12.264 18:32:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:34:12.264 pt2 00:34:12.264 pt3' 00:34:12.264 18:32:42 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, 
.md_interleave, .dif_type] | join(" ")' 00:34:12.264 18:32:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:34:12.264 18:32:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:34:12.264 18:32:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:34:12.264 18:32:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:34:12.264 18:32:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:12.264 18:32:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:34:12.264 18:32:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:12.264 18:32:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:34:12.264 18:32:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:34:12.264 18:32:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:34:12.264 18:32:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:34:12.264 18:32:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:34:12.264 18:32:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:12.264 18:32:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:34:12.264 18:32:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:12.264 18:32:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:34:12.264 18:32:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:34:12.264 
18:32:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:34:12.264 18:32:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:34:12.264 18:32:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:34:12.264 18:32:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:12.264 18:32:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:34:12.264 18:32:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:12.264 18:32:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:34:12.264 18:32:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:34:12.264 18:32:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:34:12.264 18:32:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:34:12.264 18:32:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:12.264 18:32:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:34:12.264 [2024-12-06 18:32:43.136547] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:34:12.264 18:32:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:12.264 18:32:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=5365294c-30fa-4295-b527-b0be22095e54 00:34:12.264 18:32:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 5365294c-30fa-4295-b527-b0be22095e54 ']' 00:34:12.264 18:32:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:34:12.264 18:32:43 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:12.264 18:32:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:34:12.264 [2024-12-06 18:32:43.192265] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:34:12.264 [2024-12-06 18:32:43.192300] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:34:12.264 [2024-12-06 18:32:43.192391] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:34:12.264 [2024-12-06 18:32:43.192476] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:34:12.264 [2024-12-06 18:32:43.192489] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:34:12.264 18:32:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:12.264 18:32:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:12.264 18:32:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:12.264 18:32:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:34:12.264 18:32:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:34:12.540 18:32:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:12.540 18:32:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:34:12.540 18:32:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:34:12.540 18:32:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:34:12.540 18:32:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:34:12.540 18:32:43 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:34:12.540 18:32:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:34:12.540 18:32:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:12.540 18:32:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:34:12.540 18:32:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:34:12.540 18:32:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:12.540 18:32:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:34:12.540 18:32:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:12.540 18:32:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:34:12.540 18:32:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:34:12.540 18:32:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:12.540 18:32:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:34:12.540 18:32:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:12.540 18:32:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:34:12.540 18:32:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:12.540 18:32:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:34:12.540 18:32:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:34:12.540 18:32:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:12.540 18:32:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # 
'[' false == true ']' 00:34:12.540 18:32:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:34:12.540 18:32:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:34:12.540 18:32:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:34:12.540 18:32:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:34:12.540 18:32:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:34:12.540 18:32:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:34:12.541 18:32:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:34:12.541 18:32:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:34:12.541 18:32:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:12.541 18:32:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:34:12.541 [2024-12-06 18:32:43.332172] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:34:12.541 [2024-12-06 18:32:43.334593] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:34:12.541 [2024-12-06 18:32:43.334655] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:34:12.541 [2024-12-06 18:32:43.334712] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:34:12.541 [2024-12-06 18:32:43.334765] 
bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:34:12.541 [2024-12-06 18:32:43.334786] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:34:12.541 [2024-12-06 18:32:43.334808] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:34:12.541 [2024-12-06 18:32:43.334820] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:34:12.541 request: 00:34:12.541 { 00:34:12.541 "name": "raid_bdev1", 00:34:12.541 "raid_level": "raid5f", 00:34:12.541 "base_bdevs": [ 00:34:12.541 "malloc1", 00:34:12.541 "malloc2", 00:34:12.541 "malloc3" 00:34:12.541 ], 00:34:12.541 "strip_size_kb": 64, 00:34:12.541 "superblock": false, 00:34:12.541 "method": "bdev_raid_create", 00:34:12.541 "req_id": 1 00:34:12.541 } 00:34:12.541 Got JSON-RPC error response 00:34:12.541 response: 00:34:12.541 { 00:34:12.541 "code": -17, 00:34:12.541 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:34:12.541 } 00:34:12.541 18:32:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:34:12.541 18:32:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:34:12.541 18:32:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:34:12.541 18:32:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:34:12.541 18:32:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:34:12.541 18:32:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:12.541 18:32:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:34:12.541 18:32:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:12.541 
18:32:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:34:12.541 18:32:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:12.541 18:32:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:34:12.541 18:32:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:34:12.541 18:32:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:34:12.541 18:32:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:12.541 18:32:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:34:12.541 [2024-12-06 18:32:43.392012] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:34:12.541 [2024-12-06 18:32:43.392063] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:34:12.541 [2024-12-06 18:32:43.392086] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:34:12.541 [2024-12-06 18:32:43.392098] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:34:12.541 [2024-12-06 18:32:43.394820] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:34:12.541 [2024-12-06 18:32:43.394859] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:34:12.541 [2024-12-06 18:32:43.394941] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:34:12.541 [2024-12-06 18:32:43.395002] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:34:12.541 pt1 00:34:12.541 18:32:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:12.541 18:32:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 
3 00:34:12.541 18:32:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:34:12.541 18:32:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:34:12.541 18:32:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:34:12.541 18:32:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:34:12.541 18:32:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:34:12.541 18:32:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:34:12.541 18:32:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:34:12.541 18:32:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:34:12.541 18:32:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:34:12.541 18:32:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:12.541 18:32:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:12.541 18:32:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:12.541 18:32:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:34:12.541 18:32:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:12.541 18:32:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:34:12.541 "name": "raid_bdev1", 00:34:12.541 "uuid": "5365294c-30fa-4295-b527-b0be22095e54", 00:34:12.541 "strip_size_kb": 64, 00:34:12.541 "state": "configuring", 00:34:12.541 "raid_level": "raid5f", 00:34:12.541 "superblock": true, 00:34:12.541 "num_base_bdevs": 3, 00:34:12.541 "num_base_bdevs_discovered": 1, 00:34:12.541 
"num_base_bdevs_operational": 3, 00:34:12.541 "base_bdevs_list": [ 00:34:12.541 { 00:34:12.541 "name": "pt1", 00:34:12.541 "uuid": "00000000-0000-0000-0000-000000000001", 00:34:12.541 "is_configured": true, 00:34:12.541 "data_offset": 2048, 00:34:12.541 "data_size": 63488 00:34:12.541 }, 00:34:12.541 { 00:34:12.541 "name": null, 00:34:12.541 "uuid": "00000000-0000-0000-0000-000000000002", 00:34:12.541 "is_configured": false, 00:34:12.541 "data_offset": 2048, 00:34:12.541 "data_size": 63488 00:34:12.541 }, 00:34:12.541 { 00:34:12.541 "name": null, 00:34:12.541 "uuid": "00000000-0000-0000-0000-000000000003", 00:34:12.541 "is_configured": false, 00:34:12.541 "data_offset": 2048, 00:34:12.541 "data_size": 63488 00:34:12.541 } 00:34:12.541 ] 00:34:12.541 }' 00:34:12.541 18:32:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:34:12.541 18:32:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:34:13.110 18:32:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:34:13.110 18:32:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:34:13.110 18:32:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:13.110 18:32:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:34:13.110 [2024-12-06 18:32:43.795439] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:34:13.110 [2024-12-06 18:32:43.795495] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:34:13.110 [2024-12-06 18:32:43.795520] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:34:13.110 [2024-12-06 18:32:43.795533] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:34:13.110 [2024-12-06 18:32:43.795973] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:34:13.110 [2024-12-06 18:32:43.796008] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:34:13.110 [2024-12-06 18:32:43.796094] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:34:13.110 [2024-12-06 18:32:43.796122] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:34:13.110 pt2 00:34:13.110 18:32:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:13.110 18:32:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:34:13.110 18:32:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:13.110 18:32:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:34:13.110 [2024-12-06 18:32:43.803433] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:34:13.110 18:32:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:13.110 18:32:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:34:13.110 18:32:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:34:13.110 18:32:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:34:13.110 18:32:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:34:13.110 18:32:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:34:13.110 18:32:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:34:13.110 18:32:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:34:13.110 18:32:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local 
num_base_bdevs 00:34:13.110 18:32:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:34:13.110 18:32:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:34:13.110 18:32:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:13.110 18:32:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:13.110 18:32:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:34:13.110 18:32:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:13.110 18:32:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:13.110 18:32:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:34:13.110 "name": "raid_bdev1", 00:34:13.110 "uuid": "5365294c-30fa-4295-b527-b0be22095e54", 00:34:13.110 "strip_size_kb": 64, 00:34:13.110 "state": "configuring", 00:34:13.110 "raid_level": "raid5f", 00:34:13.111 "superblock": true, 00:34:13.111 "num_base_bdevs": 3, 00:34:13.111 "num_base_bdevs_discovered": 1, 00:34:13.111 "num_base_bdevs_operational": 3, 00:34:13.111 "base_bdevs_list": [ 00:34:13.111 { 00:34:13.111 "name": "pt1", 00:34:13.111 "uuid": "00000000-0000-0000-0000-000000000001", 00:34:13.111 "is_configured": true, 00:34:13.111 "data_offset": 2048, 00:34:13.111 "data_size": 63488 00:34:13.111 }, 00:34:13.111 { 00:34:13.111 "name": null, 00:34:13.111 "uuid": "00000000-0000-0000-0000-000000000002", 00:34:13.111 "is_configured": false, 00:34:13.111 "data_offset": 0, 00:34:13.111 "data_size": 63488 00:34:13.111 }, 00:34:13.111 { 00:34:13.111 "name": null, 00:34:13.111 "uuid": "00000000-0000-0000-0000-000000000003", 00:34:13.111 "is_configured": false, 00:34:13.111 "data_offset": 2048, 00:34:13.111 "data_size": 63488 00:34:13.111 } 00:34:13.111 ] 00:34:13.111 }' 00:34:13.111 18:32:43 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:34:13.111 18:32:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:34:13.370 18:32:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:34:13.370 18:32:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:34:13.370 18:32:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:34:13.370 18:32:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:13.370 18:32:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:34:13.370 [2024-12-06 18:32:44.226819] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:34:13.370 [2024-12-06 18:32:44.226877] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:34:13.370 [2024-12-06 18:32:44.226896] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:34:13.370 [2024-12-06 18:32:44.226910] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:34:13.370 [2024-12-06 18:32:44.227385] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:34:13.370 [2024-12-06 18:32:44.227417] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:34:13.370 [2024-12-06 18:32:44.227487] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:34:13.370 [2024-12-06 18:32:44.227511] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:34:13.370 pt2 00:34:13.370 18:32:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:13.370 18:32:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:34:13.370 18:32:44 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:34:13.370 18:32:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:34:13.370 18:32:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:13.370 18:32:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:34:13.370 [2024-12-06 18:32:44.238802] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:34:13.370 [2024-12-06 18:32:44.238851] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:34:13.370 [2024-12-06 18:32:44.238867] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:34:13.370 [2024-12-06 18:32:44.238881] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:34:13.370 [2024-12-06 18:32:44.239271] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:34:13.370 [2024-12-06 18:32:44.239305] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:34:13.370 [2024-12-06 18:32:44.239362] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:34:13.370 [2024-12-06 18:32:44.239384] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:34:13.370 [2024-12-06 18:32:44.239516] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:34:13.370 [2024-12-06 18:32:44.239532] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:34:13.370 [2024-12-06 18:32:44.239796] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:34:13.370 [2024-12-06 18:32:44.245652] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:34:13.370 [2024-12-06 18:32:44.245676] 
bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:34:13.370 [2024-12-06 18:32:44.245854] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:34:13.370 pt3 00:34:13.370 18:32:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:13.370 18:32:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:34:13.370 18:32:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:34:13.370 18:32:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:34:13.370 18:32:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:34:13.370 18:32:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:34:13.370 18:32:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:34:13.370 18:32:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:34:13.370 18:32:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:34:13.370 18:32:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:34:13.370 18:32:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:34:13.370 18:32:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:34:13.370 18:32:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:34:13.370 18:32:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:13.370 18:32:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:13.370 18:32:44 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:34:13.370 18:32:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:34:13.370 18:32:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:13.370 18:32:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:34:13.370 "name": "raid_bdev1", 00:34:13.370 "uuid": "5365294c-30fa-4295-b527-b0be22095e54", 00:34:13.370 "strip_size_kb": 64, 00:34:13.370 "state": "online", 00:34:13.370 "raid_level": "raid5f", 00:34:13.370 "superblock": true, 00:34:13.370 "num_base_bdevs": 3, 00:34:13.370 "num_base_bdevs_discovered": 3, 00:34:13.370 "num_base_bdevs_operational": 3, 00:34:13.370 "base_bdevs_list": [ 00:34:13.370 { 00:34:13.370 "name": "pt1", 00:34:13.370 "uuid": "00000000-0000-0000-0000-000000000001", 00:34:13.370 "is_configured": true, 00:34:13.370 "data_offset": 2048, 00:34:13.370 "data_size": 63488 00:34:13.370 }, 00:34:13.370 { 00:34:13.370 "name": "pt2", 00:34:13.370 "uuid": "00000000-0000-0000-0000-000000000002", 00:34:13.370 "is_configured": true, 00:34:13.370 "data_offset": 2048, 00:34:13.370 "data_size": 63488 00:34:13.370 }, 00:34:13.370 { 00:34:13.370 "name": "pt3", 00:34:13.370 "uuid": "00000000-0000-0000-0000-000000000003", 00:34:13.370 "is_configured": true, 00:34:13.371 "data_offset": 2048, 00:34:13.371 "data_size": 63488 00:34:13.371 } 00:34:13.371 ] 00:34:13.371 }' 00:34:13.371 18:32:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:34:13.371 18:32:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:34:13.938 18:32:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:34:13.938 18:32:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:34:13.938 18:32:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 
00:34:13.938 18:32:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:34:13.938 18:32:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:34:13.938 18:32:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:34:13.938 18:32:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:34:13.938 18:32:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:13.938 18:32:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:34:13.938 18:32:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:34:13.938 [2024-12-06 18:32:44.668278] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:34:13.938 18:32:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:13.938 18:32:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:34:13.938 "name": "raid_bdev1", 00:34:13.938 "aliases": [ 00:34:13.938 "5365294c-30fa-4295-b527-b0be22095e54" 00:34:13.938 ], 00:34:13.938 "product_name": "Raid Volume", 00:34:13.938 "block_size": 512, 00:34:13.938 "num_blocks": 126976, 00:34:13.938 "uuid": "5365294c-30fa-4295-b527-b0be22095e54", 00:34:13.938 "assigned_rate_limits": { 00:34:13.938 "rw_ios_per_sec": 0, 00:34:13.938 "rw_mbytes_per_sec": 0, 00:34:13.938 "r_mbytes_per_sec": 0, 00:34:13.938 "w_mbytes_per_sec": 0 00:34:13.938 }, 00:34:13.938 "claimed": false, 00:34:13.938 "zoned": false, 00:34:13.938 "supported_io_types": { 00:34:13.938 "read": true, 00:34:13.938 "write": true, 00:34:13.938 "unmap": false, 00:34:13.938 "flush": false, 00:34:13.938 "reset": true, 00:34:13.938 "nvme_admin": false, 00:34:13.938 "nvme_io": false, 00:34:13.938 "nvme_io_md": false, 00:34:13.938 "write_zeroes": true, 00:34:13.938 "zcopy": false, 00:34:13.938 
"get_zone_info": false, 00:34:13.938 "zone_management": false, 00:34:13.938 "zone_append": false, 00:34:13.938 "compare": false, 00:34:13.938 "compare_and_write": false, 00:34:13.938 "abort": false, 00:34:13.938 "seek_hole": false, 00:34:13.938 "seek_data": false, 00:34:13.938 "copy": false, 00:34:13.938 "nvme_iov_md": false 00:34:13.938 }, 00:34:13.938 "driver_specific": { 00:34:13.938 "raid": { 00:34:13.938 "uuid": "5365294c-30fa-4295-b527-b0be22095e54", 00:34:13.938 "strip_size_kb": 64, 00:34:13.938 "state": "online", 00:34:13.938 "raid_level": "raid5f", 00:34:13.938 "superblock": true, 00:34:13.938 "num_base_bdevs": 3, 00:34:13.938 "num_base_bdevs_discovered": 3, 00:34:13.938 "num_base_bdevs_operational": 3, 00:34:13.938 "base_bdevs_list": [ 00:34:13.938 { 00:34:13.938 "name": "pt1", 00:34:13.938 "uuid": "00000000-0000-0000-0000-000000000001", 00:34:13.938 "is_configured": true, 00:34:13.938 "data_offset": 2048, 00:34:13.938 "data_size": 63488 00:34:13.938 }, 00:34:13.938 { 00:34:13.938 "name": "pt2", 00:34:13.938 "uuid": "00000000-0000-0000-0000-000000000002", 00:34:13.938 "is_configured": true, 00:34:13.938 "data_offset": 2048, 00:34:13.939 "data_size": 63488 00:34:13.939 }, 00:34:13.939 { 00:34:13.939 "name": "pt3", 00:34:13.939 "uuid": "00000000-0000-0000-0000-000000000003", 00:34:13.939 "is_configured": true, 00:34:13.939 "data_offset": 2048, 00:34:13.939 "data_size": 63488 00:34:13.939 } 00:34:13.939 ] 00:34:13.939 } 00:34:13.939 } 00:34:13.939 }' 00:34:13.939 18:32:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:34:13.939 18:32:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:34:13.939 pt2 00:34:13.939 pt3' 00:34:13.939 18:32:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:34:13.939 18:32:44 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:34:13.939 18:32:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:34:13.939 18:32:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:34:13.939 18:32:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:13.939 18:32:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:34:13.939 18:32:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:34:13.939 18:32:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:13.939 18:32:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:34:13.939 18:32:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:34:13.939 18:32:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:34:13.939 18:32:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:34:13.939 18:32:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:34:13.939 18:32:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:13.939 18:32:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:34:13.939 18:32:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:13.939 18:32:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:34:13.939 18:32:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:34:13.939 18:32:44 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:34:13.939 18:32:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:34:13.939 18:32:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:34:13.939 18:32:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:13.939 18:32:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:34:14.198 18:32:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:14.198 18:32:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:34:14.198 18:32:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:34:14.198 18:32:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:34:14.198 18:32:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:34:14.198 18:32:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:14.198 18:32:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:34:14.198 [2024-12-06 18:32:44.919865] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:34:14.198 18:32:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:14.198 18:32:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 5365294c-30fa-4295-b527-b0be22095e54 '!=' 5365294c-30fa-4295-b527-b0be22095e54 ']' 00:34:14.198 18:32:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid5f 00:34:14.198 18:32:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:34:14.198 18:32:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 
00:34:14.198 18:32:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:34:14.198 18:32:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:14.198 18:32:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:34:14.198 [2024-12-06 18:32:44.951692] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:34:14.198 18:32:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:14.198 18:32:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:34:14.198 18:32:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:34:14.198 18:32:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:34:14.198 18:32:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:34:14.198 18:32:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:34:14.198 18:32:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:34:14.198 18:32:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:34:14.198 18:32:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:34:14.198 18:32:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:34:14.198 18:32:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:34:14.198 18:32:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:14.198 18:32:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:14.198 18:32:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:34:14.198 18:32:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:34:14.198 18:32:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:14.198 18:32:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:34:14.198 "name": "raid_bdev1", 00:34:14.198 "uuid": "5365294c-30fa-4295-b527-b0be22095e54", 00:34:14.198 "strip_size_kb": 64, 00:34:14.198 "state": "online", 00:34:14.198 "raid_level": "raid5f", 00:34:14.198 "superblock": true, 00:34:14.198 "num_base_bdevs": 3, 00:34:14.198 "num_base_bdevs_discovered": 2, 00:34:14.198 "num_base_bdevs_operational": 2, 00:34:14.198 "base_bdevs_list": [ 00:34:14.198 { 00:34:14.198 "name": null, 00:34:14.198 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:14.198 "is_configured": false, 00:34:14.198 "data_offset": 0, 00:34:14.198 "data_size": 63488 00:34:14.198 }, 00:34:14.198 { 00:34:14.198 "name": "pt2", 00:34:14.198 "uuid": "00000000-0000-0000-0000-000000000002", 00:34:14.198 "is_configured": true, 00:34:14.198 "data_offset": 2048, 00:34:14.198 "data_size": 63488 00:34:14.198 }, 00:34:14.198 { 00:34:14.198 "name": "pt3", 00:34:14.198 "uuid": "00000000-0000-0000-0000-000000000003", 00:34:14.198 "is_configured": true, 00:34:14.198 "data_offset": 2048, 00:34:14.198 "data_size": 63488 00:34:14.198 } 00:34:14.198 ] 00:34:14.198 }' 00:34:14.198 18:32:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:34:14.198 18:32:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:34:14.456 18:32:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:34:14.456 18:32:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:14.456 18:32:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:34:14.456 [2024-12-06 18:32:45.383097] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:34:14.456 [2024-12-06 18:32:45.383126] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:34:14.456 [2024-12-06 18:32:45.383201] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:34:14.456 [2024-12-06 18:32:45.383261] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:34:14.456 [2024-12-06 18:32:45.383278] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:34:14.456 18:32:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:14.456 18:32:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:14.456 18:32:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:34:14.456 18:32:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:14.456 18:32:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:34:14.456 18:32:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:14.714 18:32:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:34:14.714 18:32:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:34:14.714 18:32:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:34:14.714 18:32:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:34:14.714 18:32:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:34:14.714 18:32:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:14.714 18:32:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 
00:34:14.714 18:32:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:14.714 18:32:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:34:14.714 18:32:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:34:14.714 18:32:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:34:14.714 18:32:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:14.714 18:32:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:34:14.714 18:32:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:14.714 18:32:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:34:14.714 18:32:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:34:14.714 18:32:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:34:14.714 18:32:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:34:14.714 18:32:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:34:14.714 18:32:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:14.714 18:32:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:34:14.714 [2024-12-06 18:32:45.462994] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:34:14.715 [2024-12-06 18:32:45.463050] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:34:14.715 [2024-12-06 18:32:45.463070] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:34:14.715 [2024-12-06 18:32:45.463084] vbdev_passthru.c: 697:vbdev_passthru_register: 
*NOTICE*: bdev claimed 00:34:14.715 [2024-12-06 18:32:45.465773] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:34:14.715 [2024-12-06 18:32:45.465816] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:34:14.715 [2024-12-06 18:32:45.465889] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:34:14.715 [2024-12-06 18:32:45.465939] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:34:14.715 pt2 00:34:14.715 18:32:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:14.715 18:32:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 2 00:34:14.715 18:32:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:34:14.715 18:32:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:34:14.715 18:32:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:34:14.715 18:32:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:34:14.715 18:32:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:34:14.715 18:32:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:34:14.715 18:32:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:34:14.715 18:32:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:34:14.715 18:32:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:34:14.715 18:32:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:14.715 18:32:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:34:14.715 18:32:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:14.715 18:32:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:34:14.715 18:32:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:14.715 18:32:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:34:14.715 "name": "raid_bdev1", 00:34:14.715 "uuid": "5365294c-30fa-4295-b527-b0be22095e54", 00:34:14.715 "strip_size_kb": 64, 00:34:14.715 "state": "configuring", 00:34:14.715 "raid_level": "raid5f", 00:34:14.715 "superblock": true, 00:34:14.715 "num_base_bdevs": 3, 00:34:14.715 "num_base_bdevs_discovered": 1, 00:34:14.715 "num_base_bdevs_operational": 2, 00:34:14.715 "base_bdevs_list": [ 00:34:14.715 { 00:34:14.715 "name": null, 00:34:14.715 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:14.715 "is_configured": false, 00:34:14.715 "data_offset": 2048, 00:34:14.715 "data_size": 63488 00:34:14.715 }, 00:34:14.715 { 00:34:14.715 "name": "pt2", 00:34:14.715 "uuid": "00000000-0000-0000-0000-000000000002", 00:34:14.715 "is_configured": true, 00:34:14.715 "data_offset": 2048, 00:34:14.715 "data_size": 63488 00:34:14.715 }, 00:34:14.715 { 00:34:14.715 "name": null, 00:34:14.715 "uuid": "00000000-0000-0000-0000-000000000003", 00:34:14.715 "is_configured": false, 00:34:14.715 "data_offset": 2048, 00:34:14.715 "data_size": 63488 00:34:14.715 } 00:34:14.715 ] 00:34:14.715 }' 00:34:14.715 18:32:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:34:14.715 18:32:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:34:14.973 18:32:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:34:14.973 18:32:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:34:14.973 18:32:45 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@519 -- # i=2 00:34:14.973 18:32:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:34:14.973 18:32:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:14.973 18:32:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:34:14.973 [2024-12-06 18:32:45.850466] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:34:14.973 [2024-12-06 18:32:45.850531] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:34:14.973 [2024-12-06 18:32:45.850553] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:34:14.973 [2024-12-06 18:32:45.850568] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:34:14.973 [2024-12-06 18:32:45.851057] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:34:14.973 [2024-12-06 18:32:45.851088] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:34:14.973 [2024-12-06 18:32:45.851173] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:34:14.973 [2024-12-06 18:32:45.851203] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:34:14.973 [2024-12-06 18:32:45.851321] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:34:14.973 [2024-12-06 18:32:45.851344] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:34:14.973 [2024-12-06 18:32:45.851636] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:34:14.973 [2024-12-06 18:32:45.857310] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:34:14.974 [2024-12-06 18:32:45.857335] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created 
with name raid_bdev1, raid_bdev 0x617000008200 00:34:14.974 [2024-12-06 18:32:45.857667] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:34:14.974 pt3 00:34:14.974 18:32:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:14.974 18:32:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:34:14.974 18:32:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:34:14.974 18:32:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:34:14.974 18:32:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:34:14.974 18:32:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:34:14.974 18:32:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:34:14.974 18:32:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:34:14.974 18:32:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:34:14.974 18:32:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:34:14.974 18:32:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:34:14.974 18:32:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:14.974 18:32:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:14.974 18:32:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:14.974 18:32:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:34:14.974 18:32:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:14.974 18:32:45 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:34:14.974 "name": "raid_bdev1", 00:34:14.974 "uuid": "5365294c-30fa-4295-b527-b0be22095e54", 00:34:14.974 "strip_size_kb": 64, 00:34:14.974 "state": "online", 00:34:14.974 "raid_level": "raid5f", 00:34:14.974 "superblock": true, 00:34:14.974 "num_base_bdevs": 3, 00:34:14.974 "num_base_bdevs_discovered": 2, 00:34:14.974 "num_base_bdevs_operational": 2, 00:34:14.974 "base_bdevs_list": [ 00:34:14.974 { 00:34:14.974 "name": null, 00:34:14.974 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:14.974 "is_configured": false, 00:34:14.974 "data_offset": 2048, 00:34:14.974 "data_size": 63488 00:34:14.974 }, 00:34:14.974 { 00:34:14.974 "name": "pt2", 00:34:14.974 "uuid": "00000000-0000-0000-0000-000000000002", 00:34:14.974 "is_configured": true, 00:34:14.974 "data_offset": 2048, 00:34:14.974 "data_size": 63488 00:34:14.974 }, 00:34:14.974 { 00:34:14.974 "name": "pt3", 00:34:14.974 "uuid": "00000000-0000-0000-0000-000000000003", 00:34:14.974 "is_configured": true, 00:34:14.974 "data_offset": 2048, 00:34:14.974 "data_size": 63488 00:34:14.974 } 00:34:14.974 ] 00:34:14.974 }' 00:34:14.974 18:32:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:34:14.974 18:32:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:34:15.540 18:32:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:34:15.540 18:32:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:15.540 18:32:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:34:15.540 [2024-12-06 18:32:46.263889] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:34:15.540 [2024-12-06 18:32:46.263922] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:34:15.540 [2024-12-06 18:32:46.263986] 
bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:34:15.541 [2024-12-06 18:32:46.264048] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:34:15.541 [2024-12-06 18:32:46.264059] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:34:15.541 18:32:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:15.541 18:32:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:34:15.541 18:32:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:15.541 18:32:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:15.541 18:32:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:34:15.541 18:32:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:15.541 18:32:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:34:15.541 18:32:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:34:15.541 18:32:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 3 -gt 2 ']' 00:34:15.541 18:32:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@534 -- # i=2 00:34:15.541 18:32:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt3 00:34:15.541 18:32:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:15.541 18:32:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:34:15.541 18:32:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:15.541 18:32:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 
00:34:15.541 18:32:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:15.541 18:32:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:34:15.541 [2024-12-06 18:32:46.331837] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:34:15.541 [2024-12-06 18:32:46.331894] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:34:15.541 [2024-12-06 18:32:46.331915] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:34:15.541 [2024-12-06 18:32:46.331928] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:34:15.541 [2024-12-06 18:32:46.334779] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:34:15.541 [2024-12-06 18:32:46.334819] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:34:15.541 [2024-12-06 18:32:46.334896] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:34:15.541 [2024-12-06 18:32:46.334945] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:34:15.541 [2024-12-06 18:32:46.335101] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:34:15.541 [2024-12-06 18:32:46.335115] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:34:15.541 [2024-12-06 18:32:46.335133] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:34:15.541 [2024-12-06 18:32:46.335207] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:34:15.541 pt1 00:34:15.541 18:32:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:15.541 18:32:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 3 -gt 2 ']' 00:34:15.541 18:32:46 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 2 00:34:15.541 18:32:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:34:15.541 18:32:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:34:15.541 18:32:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:34:15.541 18:32:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:34:15.541 18:32:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:34:15.541 18:32:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:34:15.541 18:32:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:34:15.541 18:32:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:34:15.541 18:32:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:34:15.541 18:32:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:15.541 18:32:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:15.541 18:32:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:34:15.541 18:32:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:15.541 18:32:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:15.541 18:32:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:34:15.541 "name": "raid_bdev1", 00:34:15.541 "uuid": "5365294c-30fa-4295-b527-b0be22095e54", 00:34:15.541 "strip_size_kb": 64, 00:34:15.541 "state": "configuring", 00:34:15.541 "raid_level": "raid5f", 00:34:15.541 
"superblock": true, 00:34:15.541 "num_base_bdevs": 3, 00:34:15.541 "num_base_bdevs_discovered": 1, 00:34:15.541 "num_base_bdevs_operational": 2, 00:34:15.541 "base_bdevs_list": [ 00:34:15.541 { 00:34:15.541 "name": null, 00:34:15.541 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:15.541 "is_configured": false, 00:34:15.541 "data_offset": 2048, 00:34:15.541 "data_size": 63488 00:34:15.541 }, 00:34:15.541 { 00:34:15.541 "name": "pt2", 00:34:15.541 "uuid": "00000000-0000-0000-0000-000000000002", 00:34:15.541 "is_configured": true, 00:34:15.541 "data_offset": 2048, 00:34:15.541 "data_size": 63488 00:34:15.541 }, 00:34:15.541 { 00:34:15.541 "name": null, 00:34:15.541 "uuid": "00000000-0000-0000-0000-000000000003", 00:34:15.541 "is_configured": false, 00:34:15.541 "data_offset": 2048, 00:34:15.541 "data_size": 63488 00:34:15.541 } 00:34:15.541 ] 00:34:15.541 }' 00:34:15.541 18:32:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:34:15.541 18:32:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:34:15.800 18:32:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:34:15.800 18:32:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:15.800 18:32:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:34:15.800 18:32:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:34:16.059 18:32:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:16.059 18:32:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:34:16.059 18:32:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:34:16.059 18:32:46 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:34:16.059 18:32:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:34:16.059 [2024-12-06 18:32:46.787182] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:34:16.059 [2024-12-06 18:32:46.787238] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:34:16.059 [2024-12-06 18:32:46.787260] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:34:16.059 [2024-12-06 18:32:46.787273] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:34:16.059 [2024-12-06 18:32:46.787752] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:34:16.059 [2024-12-06 18:32:46.787782] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:34:16.059 [2024-12-06 18:32:46.787859] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:34:16.059 [2024-12-06 18:32:46.787882] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:34:16.059 [2024-12-06 18:32:46.788007] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:34:16.059 [2024-12-06 18:32:46.788019] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:34:16.059 [2024-12-06 18:32:46.788320] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:34:16.059 [2024-12-06 18:32:46.794572] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:34:16.059 [2024-12-06 18:32:46.794612] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:34:16.059 [2024-12-06 18:32:46.794870] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:34:16.059 pt3 00:34:16.059 18:32:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:34:16.059 18:32:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:34:16.059 18:32:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:34:16.059 18:32:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:34:16.060 18:32:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:34:16.060 18:32:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:34:16.060 18:32:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:34:16.060 18:32:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:34:16.060 18:32:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:34:16.060 18:32:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:34:16.060 18:32:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:34:16.060 18:32:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:16.060 18:32:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:16.060 18:32:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:34:16.060 18:32:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:16.060 18:32:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:16.060 18:32:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:34:16.060 "name": "raid_bdev1", 00:34:16.060 "uuid": "5365294c-30fa-4295-b527-b0be22095e54", 00:34:16.060 "strip_size_kb": 64, 00:34:16.060 "state": "online", 00:34:16.060 "raid_level": 
"raid5f", 00:34:16.060 "superblock": true, 00:34:16.060 "num_base_bdevs": 3, 00:34:16.060 "num_base_bdevs_discovered": 2, 00:34:16.060 "num_base_bdevs_operational": 2, 00:34:16.060 "base_bdevs_list": [ 00:34:16.060 { 00:34:16.060 "name": null, 00:34:16.060 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:16.060 "is_configured": false, 00:34:16.060 "data_offset": 2048, 00:34:16.060 "data_size": 63488 00:34:16.060 }, 00:34:16.060 { 00:34:16.060 "name": "pt2", 00:34:16.060 "uuid": "00000000-0000-0000-0000-000000000002", 00:34:16.060 "is_configured": true, 00:34:16.060 "data_offset": 2048, 00:34:16.060 "data_size": 63488 00:34:16.060 }, 00:34:16.060 { 00:34:16.060 "name": "pt3", 00:34:16.060 "uuid": "00000000-0000-0000-0000-000000000003", 00:34:16.060 "is_configured": true, 00:34:16.060 "data_offset": 2048, 00:34:16.060 "data_size": 63488 00:34:16.060 } 00:34:16.060 ] 00:34:16.060 }' 00:34:16.060 18:32:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:34:16.060 18:32:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:34:16.318 18:32:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:34:16.318 18:32:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:34:16.318 18:32:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:16.318 18:32:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:34:16.318 18:32:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:16.318 18:32:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:34:16.319 18:32:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:34:16.319 18:32:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:34:16.319 18:32:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:34:16.319 18:32:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:34:16.319 [2024-12-06 18:32:47.233810] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:34:16.319 18:32:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:16.578 18:32:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 5365294c-30fa-4295-b527-b0be22095e54 '!=' 5365294c-30fa-4295-b527-b0be22095e54 ']' 00:34:16.578 18:32:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 80870 00:34:16.578 18:32:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 80870 ']' 00:34:16.578 18:32:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@958 -- # kill -0 80870 00:34:16.578 18:32:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@959 -- # uname 00:34:16.578 18:32:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:16.578 18:32:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80870 00:34:16.578 18:32:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:34:16.578 18:32:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:34:16.578 killing process with pid 80870 00:34:16.578 18:32:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80870' 00:34:16.578 18:32:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@973 -- # kill 80870 00:34:16.578 [2024-12-06 18:32:47.311584] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:34:16.578 [2024-12-06 18:32:47.311676] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: 
raid_bdev_destruct 00:34:16.578 18:32:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@978 -- # wait 80870 00:34:16.578 [2024-12-06 18:32:47.311738] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:34:16.578 [2024-12-06 18:32:47.311754] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:34:16.837 [2024-12-06 18:32:47.632011] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:34:18.216 18:32:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:34:18.216 00:34:18.216 real 0m7.530s 00:34:18.216 user 0m11.544s 00:34:18.216 sys 0m1.636s 00:34:18.216 18:32:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:18.216 18:32:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:34:18.216 ************************************ 00:34:18.216 END TEST raid5f_superblock_test 00:34:18.216 ************************************ 00:34:18.216 18:32:48 bdev_raid -- bdev/bdev_raid.sh@989 -- # '[' true = true ']' 00:34:18.216 18:32:48 bdev_raid -- bdev/bdev_raid.sh@990 -- # run_test raid5f_rebuild_test raid_rebuild_test raid5f 3 false false true 00:34:18.216 18:32:48 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:34:18.216 18:32:48 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:18.216 18:32:48 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:34:18.216 ************************************ 00:34:18.216 START TEST raid5f_rebuild_test 00:34:18.216 ************************************ 00:34:18.216 18:32:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid5f 3 false false true 00:34:18.216 18:32:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:34:18.216 18:32:48 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@570 -- # local num_base_bdevs=3 00:34:18.216 18:32:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:34:18.216 18:32:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:34:18.216 18:32:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:34:18.216 18:32:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:34:18.216 18:32:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:34:18.216 18:32:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:34:18.216 18:32:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:34:18.216 18:32:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:34:18.216 18:32:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:34:18.216 18:32:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:34:18.216 18:32:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:34:18.216 18:32:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:34:18.216 18:32:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:34:18.216 18:32:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:34:18.216 18:32:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:34:18.216 18:32:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:34:18.216 18:32:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:34:18.216 18:32:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:34:18.216 18:32:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 
00:34:18.216 18:32:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:34:18.216 18:32:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:34:18.216 18:32:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:34:18.216 18:32:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:34:18.216 18:32:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:34:18.216 18:32:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:34:18.216 18:32:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:34:18.216 18:32:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=81303 00:34:18.216 18:32:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 81303 00:34:18.216 18:32:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:34:18.216 18:32:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@835 -- # '[' -z 81303 ']' 00:34:18.216 18:32:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:18.216 18:32:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:18.216 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:18.216 18:32:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:34:18.216 18:32:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:18.216 18:32:48 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:34:18.216 [2024-12-06 18:32:49.020903] Starting SPDK v25.01-pre git sha1 b6a18b192 / DPDK 24.03.0 initialization... 00:34:18.216 [2024-12-06 18:32:49.021021] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81303 ] 00:34:18.216 I/O size of 3145728 is greater than zero copy threshold (65536). 00:34:18.216 Zero copy mechanism will not be used. 00:34:18.475 [2024-12-06 18:32:49.201008] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:18.475 [2024-12-06 18:32:49.331579] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:18.734 [2024-12-06 18:32:49.561461] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:34:18.734 [2024-12-06 18:32:49.561525] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:34:18.994 18:32:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:18.994 18:32:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@868 -- # return 0 00:34:18.994 18:32:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:34:18.994 18:32:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:34:18.994 18:32:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:18.994 18:32:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:34:18.994 BaseBdev1_malloc 00:34:18.994 18:32:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:18.994 
18:32:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:34:18.994 18:32:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:18.994 18:32:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:34:18.994 [2024-12-06 18:32:49.897840] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:34:18.994 [2024-12-06 18:32:49.897915] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:34:18.994 [2024-12-06 18:32:49.897942] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:34:18.994 [2024-12-06 18:32:49.897958] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:34:18.994 [2024-12-06 18:32:49.900695] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:34:18.994 [2024-12-06 18:32:49.900741] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:34:18.994 BaseBdev1 00:34:18.994 18:32:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:18.994 18:32:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:34:18.994 18:32:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:34:18.994 18:32:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:18.994 18:32:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:34:19.253 BaseBdev2_malloc 00:34:19.253 18:32:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:19.253 18:32:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:34:19.253 18:32:49 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:34:19.253 18:32:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:34:19.253 [2024-12-06 18:32:49.960602] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:34:19.253 [2024-12-06 18:32:49.960673] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:34:19.253 [2024-12-06 18:32:49.960700] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:34:19.253 [2024-12-06 18:32:49.960716] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:34:19.253 [2024-12-06 18:32:49.963420] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:34:19.254 [2024-12-06 18:32:49.963589] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:34:19.254 BaseBdev2 00:34:19.254 18:32:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:19.254 18:32:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:34:19.254 18:32:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:34:19.254 18:32:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:19.254 18:32:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:34:19.254 BaseBdev3_malloc 00:34:19.254 18:32:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:19.254 18:32:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:34:19.254 18:32:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:19.254 18:32:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:34:19.254 [2024-12-06 18:32:50.036368] vbdev_passthru.c: 
608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:34:19.254 [2024-12-06 18:32:50.036440] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:34:19.254 [2024-12-06 18:32:50.036468] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:34:19.254 [2024-12-06 18:32:50.036484] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:34:19.254 [2024-12-06 18:32:50.040028] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:34:19.254 [2024-12-06 18:32:50.040076] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:34:19.254 BaseBdev3 00:34:19.254 18:32:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:19.254 18:32:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:34:19.254 18:32:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:19.254 18:32:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:34:19.254 spare_malloc 00:34:19.254 18:32:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:19.254 18:32:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:34:19.254 18:32:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:19.254 18:32:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:34:19.254 spare_delay 00:34:19.254 18:32:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:19.254 18:32:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:34:19.254 18:32:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:34:19.254 18:32:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:34:19.254 [2024-12-06 18:32:50.111356] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:34:19.254 [2024-12-06 18:32:50.111537] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:34:19.254 [2024-12-06 18:32:50.111567] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:34:19.254 [2024-12-06 18:32:50.111583] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:34:19.254 [2024-12-06 18:32:50.114281] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:34:19.254 [2024-12-06 18:32:50.114325] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:34:19.254 spare 00:34:19.254 18:32:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:19.254 18:32:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 00:34:19.254 18:32:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:19.254 18:32:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:34:19.254 [2024-12-06 18:32:50.123410] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:34:19.254 [2024-12-06 18:32:50.125763] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:34:19.254 [2024-12-06 18:32:50.125956] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:34:19.254 [2024-12-06 18:32:50.126063] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:34:19.254 [2024-12-06 18:32:50.126078] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:34:19.254 [2024-12-06 
18:32:50.126387] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:34:19.254 [2024-12-06 18:32:50.132208] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:34:19.254 [2024-12-06 18:32:50.132322] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:34:19.254 [2024-12-06 18:32:50.132602] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:34:19.254 18:32:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:19.254 18:32:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:34:19.254 18:32:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:34:19.254 18:32:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:34:19.254 18:32:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:34:19.254 18:32:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:34:19.254 18:32:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:34:19.254 18:32:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:34:19.254 18:32:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:34:19.254 18:32:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:34:19.254 18:32:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:34:19.254 18:32:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:19.254 18:32:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:19.254 18:32:50 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:34:19.254 18:32:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:34:19.254 18:32:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:19.254 18:32:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:34:19.254 "name": "raid_bdev1", 00:34:19.254 "uuid": "d5e85e09-def4-414f-b0a5-fde06bae210d", 00:34:19.254 "strip_size_kb": 64, 00:34:19.254 "state": "online", 00:34:19.254 "raid_level": "raid5f", 00:34:19.254 "superblock": false, 00:34:19.254 "num_base_bdevs": 3, 00:34:19.254 "num_base_bdevs_discovered": 3, 00:34:19.254 "num_base_bdevs_operational": 3, 00:34:19.254 "base_bdevs_list": [ 00:34:19.254 { 00:34:19.254 "name": "BaseBdev1", 00:34:19.254 "uuid": "9444936b-929f-578e-a7a6-748e48a9a460", 00:34:19.254 "is_configured": true, 00:34:19.254 "data_offset": 0, 00:34:19.254 "data_size": 65536 00:34:19.254 }, 00:34:19.254 { 00:34:19.254 "name": "BaseBdev2", 00:34:19.254 "uuid": "1f28579d-e141-5f15-8bff-0d207ea4db26", 00:34:19.254 "is_configured": true, 00:34:19.254 "data_offset": 0, 00:34:19.254 "data_size": 65536 00:34:19.254 }, 00:34:19.254 { 00:34:19.254 "name": "BaseBdev3", 00:34:19.254 "uuid": "761c7305-2ef1-527b-894e-550a766b318f", 00:34:19.254 "is_configured": true, 00:34:19.254 "data_offset": 0, 00:34:19.254 "data_size": 65536 00:34:19.254 } 00:34:19.254 ] 00:34:19.254 }' 00:34:19.254 18:32:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:34:19.254 18:32:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:34:19.822 18:32:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:34:19.823 18:32:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:19.823 18:32:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:34:19.823 18:32:50 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:34:19.823 [2024-12-06 18:32:50.499938] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:34:19.823 18:32:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:19.823 18:32:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=131072 00:34:19.823 18:32:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:19.823 18:32:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:34:19.823 18:32:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:19.823 18:32:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:34:19.823 18:32:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:19.823 18:32:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:34:19.823 18:32:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:34:19.823 18:32:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:34:19.823 18:32:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:34:19.823 18:32:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:34:19.823 18:32:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:34:19.823 18:32:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:34:19.823 18:32:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:34:19.823 18:32:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:34:19.823 18:32:50 bdev_raid.raid5f_rebuild_test -- 
bdev/nbd_common.sh@11 -- # local nbd_list 00:34:19.823 18:32:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:34:19.823 18:32:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:34:19.823 18:32:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:34:19.823 18:32:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:34:20.082 [2024-12-06 18:32:50.783429] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:34:20.082 /dev/nbd0 00:34:20.082 18:32:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:34:20.082 18:32:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:34:20.082 18:32:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:34:20.082 18:32:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:34:20.082 18:32:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:34:20.082 18:32:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:34:20.082 18:32:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:34:20.082 18:32:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:34:20.082 18:32:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:34:20.082 18:32:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:34:20.082 18:32:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:34:20.082 1+0 records in 00:34:20.082 1+0 records out 00:34:20.082 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000442166 s, 
9.3 MB/s 00:34:20.082 18:32:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:34:20.082 18:32:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:34:20.082 18:32:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:34:20.082 18:32:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:34:20.082 18:32:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:34:20.082 18:32:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:34:20.082 18:32:50 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:34:20.082 18:32:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:34:20.082 18:32:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@630 -- # write_unit_size=256 00:34:20.082 18:32:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@631 -- # echo 128 00:34:20.082 18:32:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=131072 count=512 oflag=direct 00:34:20.342 512+0 records in 00:34:20.342 512+0 records out 00:34:20.342 67108864 bytes (67 MB, 64 MiB) copied, 0.396458 s, 169 MB/s 00:34:20.342 18:32:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:34:20.342 18:32:51 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:34:20.342 18:32:51 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:34:20.342 18:32:51 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:34:20.342 18:32:51 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:34:20.342 18:32:51 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i 
in "${nbd_list[@]}" 00:34:20.342 18:32:51 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:34:20.602 18:32:51 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:34:20.602 [2024-12-06 18:32:51.482840] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:34:20.602 18:32:51 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:34:20.602 18:32:51 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:34:20.602 18:32:51 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:34:20.602 18:32:51 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:34:20.602 18:32:51 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:34:20.602 18:32:51 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:34:20.602 18:32:51 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:34:20.602 18:32:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:34:20.602 18:32:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:20.602 18:32:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:34:20.602 [2024-12-06 18:32:51.498398] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:34:20.602 18:32:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:20.602 18:32:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:34:20.602 18:32:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:34:20.602 18:32:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 
00:34:20.602 18:32:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:34:20.602 18:32:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:34:20.602 18:32:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:34:20.602 18:32:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:34:20.602 18:32:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:34:20.602 18:32:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:34:20.602 18:32:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:34:20.602 18:32:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:20.602 18:32:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:20.602 18:32:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:20.602 18:32:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:34:20.602 18:32:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:20.602 18:32:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:34:20.602 "name": "raid_bdev1", 00:34:20.602 "uuid": "d5e85e09-def4-414f-b0a5-fde06bae210d", 00:34:20.602 "strip_size_kb": 64, 00:34:20.602 "state": "online", 00:34:20.602 "raid_level": "raid5f", 00:34:20.602 "superblock": false, 00:34:20.602 "num_base_bdevs": 3, 00:34:20.602 "num_base_bdevs_discovered": 2, 00:34:20.602 "num_base_bdevs_operational": 2, 00:34:20.602 "base_bdevs_list": [ 00:34:20.602 { 00:34:20.602 "name": null, 00:34:20.602 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:20.602 "is_configured": false, 00:34:20.602 "data_offset": 0, 00:34:20.602 "data_size": 65536 00:34:20.602 }, 
00:34:20.602 { 00:34:20.602 "name": "BaseBdev2", 00:34:20.602 "uuid": "1f28579d-e141-5f15-8bff-0d207ea4db26", 00:34:20.602 "is_configured": true, 00:34:20.602 "data_offset": 0, 00:34:20.602 "data_size": 65536 00:34:20.602 }, 00:34:20.602 { 00:34:20.602 "name": "BaseBdev3", 00:34:20.602 "uuid": "761c7305-2ef1-527b-894e-550a766b318f", 00:34:20.602 "is_configured": true, 00:34:20.602 "data_offset": 0, 00:34:20.602 "data_size": 65536 00:34:20.602 } 00:34:20.602 ] 00:34:20.602 }' 00:34:20.602 18:32:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:34:20.861 18:32:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:34:21.120 18:32:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:34:21.120 18:32:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:21.120 18:32:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:34:21.120 [2024-12-06 18:32:51.917893] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:34:21.120 [2024-12-06 18:32:51.934728] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b680 00:34:21.120 18:32:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:21.120 18:32:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:34:21.120 [2024-12-06 18:32:51.943039] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:34:22.055 18:32:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:34:22.055 18:32:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:34:22.055 18:32:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:34:22.055 18:32:52 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@171 -- # local target=spare 00:34:22.055 18:32:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:34:22.055 18:32:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:22.055 18:32:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:22.055 18:32:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:22.055 18:32:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:34:22.055 18:32:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:22.055 18:32:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:34:22.055 "name": "raid_bdev1", 00:34:22.055 "uuid": "d5e85e09-def4-414f-b0a5-fde06bae210d", 00:34:22.055 "strip_size_kb": 64, 00:34:22.055 "state": "online", 00:34:22.055 "raid_level": "raid5f", 00:34:22.055 "superblock": false, 00:34:22.055 "num_base_bdevs": 3, 00:34:22.055 "num_base_bdevs_discovered": 3, 00:34:22.055 "num_base_bdevs_operational": 3, 00:34:22.055 "process": { 00:34:22.055 "type": "rebuild", 00:34:22.055 "target": "spare", 00:34:22.055 "progress": { 00:34:22.055 "blocks": 20480, 00:34:22.055 "percent": 15 00:34:22.055 } 00:34:22.055 }, 00:34:22.055 "base_bdevs_list": [ 00:34:22.055 { 00:34:22.055 "name": "spare", 00:34:22.055 "uuid": "0a08ca41-8119-50b4-8283-a8ea1c028e4b", 00:34:22.055 "is_configured": true, 00:34:22.055 "data_offset": 0, 00:34:22.055 "data_size": 65536 00:34:22.055 }, 00:34:22.055 { 00:34:22.055 "name": "BaseBdev2", 00:34:22.055 "uuid": "1f28579d-e141-5f15-8bff-0d207ea4db26", 00:34:22.055 "is_configured": true, 00:34:22.055 "data_offset": 0, 00:34:22.055 "data_size": 65536 00:34:22.055 }, 00:34:22.055 { 00:34:22.055 "name": "BaseBdev3", 00:34:22.055 "uuid": "761c7305-2ef1-527b-894e-550a766b318f", 00:34:22.055 "is_configured": true, 00:34:22.055 
"data_offset": 0, 00:34:22.055 "data_size": 65536 00:34:22.055 } 00:34:22.055 ] 00:34:22.055 }' 00:34:22.055 18:32:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:34:22.314 18:32:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:34:22.314 18:32:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:34:22.314 18:32:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:34:22.314 18:32:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:34:22.314 18:32:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:22.314 18:32:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:34:22.314 [2024-12-06 18:32:53.086842] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:34:22.314 [2024-12-06 18:32:53.153607] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:34:22.314 [2024-12-06 18:32:53.153673] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:34:22.314 [2024-12-06 18:32:53.153697] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:34:22.314 [2024-12-06 18:32:53.153706] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:34:22.314 18:32:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:22.314 18:32:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:34:22.314 18:32:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:34:22.314 18:32:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:34:22.314 18:32:53 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:34:22.314 18:32:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:34:22.314 18:32:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:34:22.314 18:32:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:34:22.314 18:32:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:34:22.314 18:32:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:34:22.314 18:32:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:34:22.314 18:32:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:22.314 18:32:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:22.314 18:32:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:22.314 18:32:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:34:22.314 18:32:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:22.314 18:32:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:34:22.314 "name": "raid_bdev1", 00:34:22.314 "uuid": "d5e85e09-def4-414f-b0a5-fde06bae210d", 00:34:22.314 "strip_size_kb": 64, 00:34:22.314 "state": "online", 00:34:22.314 "raid_level": "raid5f", 00:34:22.314 "superblock": false, 00:34:22.314 "num_base_bdevs": 3, 00:34:22.314 "num_base_bdevs_discovered": 2, 00:34:22.314 "num_base_bdevs_operational": 2, 00:34:22.314 "base_bdevs_list": [ 00:34:22.314 { 00:34:22.314 "name": null, 00:34:22.314 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:22.314 "is_configured": false, 00:34:22.314 "data_offset": 0, 00:34:22.314 "data_size": 65536 00:34:22.314 }, 00:34:22.314 { 00:34:22.314 
"name": "BaseBdev2", 00:34:22.314 "uuid": "1f28579d-e141-5f15-8bff-0d207ea4db26", 00:34:22.314 "is_configured": true, 00:34:22.314 "data_offset": 0, 00:34:22.314 "data_size": 65536 00:34:22.314 }, 00:34:22.314 { 00:34:22.314 "name": "BaseBdev3", 00:34:22.314 "uuid": "761c7305-2ef1-527b-894e-550a766b318f", 00:34:22.314 "is_configured": true, 00:34:22.314 "data_offset": 0, 00:34:22.314 "data_size": 65536 00:34:22.314 } 00:34:22.314 ] 00:34:22.314 }' 00:34:22.314 18:32:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:34:22.314 18:32:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:34:22.881 18:32:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:34:22.881 18:32:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:34:22.881 18:32:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:34:22.881 18:32:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:34:22.881 18:32:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:34:22.881 18:32:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:22.881 18:32:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:22.881 18:32:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:22.881 18:32:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:34:22.881 18:32:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:22.881 18:32:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:34:22.881 "name": "raid_bdev1", 00:34:22.881 "uuid": "d5e85e09-def4-414f-b0a5-fde06bae210d", 00:34:22.881 "strip_size_kb": 64, 00:34:22.881 "state": 
"online", 00:34:22.881 "raid_level": "raid5f", 00:34:22.881 "superblock": false, 00:34:22.881 "num_base_bdevs": 3, 00:34:22.881 "num_base_bdevs_discovered": 2, 00:34:22.881 "num_base_bdevs_operational": 2, 00:34:22.881 "base_bdevs_list": [ 00:34:22.881 { 00:34:22.881 "name": null, 00:34:22.881 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:22.881 "is_configured": false, 00:34:22.881 "data_offset": 0, 00:34:22.881 "data_size": 65536 00:34:22.881 }, 00:34:22.881 { 00:34:22.881 "name": "BaseBdev2", 00:34:22.881 "uuid": "1f28579d-e141-5f15-8bff-0d207ea4db26", 00:34:22.881 "is_configured": true, 00:34:22.881 "data_offset": 0, 00:34:22.881 "data_size": 65536 00:34:22.881 }, 00:34:22.881 { 00:34:22.881 "name": "BaseBdev3", 00:34:22.881 "uuid": "761c7305-2ef1-527b-894e-550a766b318f", 00:34:22.881 "is_configured": true, 00:34:22.881 "data_offset": 0, 00:34:22.881 "data_size": 65536 00:34:22.881 } 00:34:22.881 ] 00:34:22.881 }' 00:34:22.881 18:32:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:34:22.881 18:32:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:34:22.881 18:32:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:34:22.881 18:32:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:34:22.882 18:32:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:34:22.882 18:32:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:22.882 18:32:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:34:22.882 [2024-12-06 18:32:53.732125] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:34:22.882 [2024-12-06 18:32:53.749513] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b750 00:34:22.882 18:32:53 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:22.882 18:32:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:34:22.882 [2024-12-06 18:32:53.757430] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:34:23.818 18:32:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:34:23.818 18:32:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:34:23.818 18:32:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:34:23.818 18:32:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:34:23.818 18:32:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:34:23.818 18:32:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:23.818 18:32:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:23.818 18:32:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:23.818 18:32:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:34:24.077 18:32:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:24.077 18:32:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:34:24.077 "name": "raid_bdev1", 00:34:24.077 "uuid": "d5e85e09-def4-414f-b0a5-fde06bae210d", 00:34:24.077 "strip_size_kb": 64, 00:34:24.077 "state": "online", 00:34:24.077 "raid_level": "raid5f", 00:34:24.077 "superblock": false, 00:34:24.077 "num_base_bdevs": 3, 00:34:24.077 "num_base_bdevs_discovered": 3, 00:34:24.077 "num_base_bdevs_operational": 3, 00:34:24.077 "process": { 00:34:24.077 "type": "rebuild", 00:34:24.077 "target": "spare", 00:34:24.077 "progress": { 
00:34:24.077 "blocks": 20480, 00:34:24.077 "percent": 15 00:34:24.077 } 00:34:24.077 }, 00:34:24.077 "base_bdevs_list": [ 00:34:24.077 { 00:34:24.077 "name": "spare", 00:34:24.077 "uuid": "0a08ca41-8119-50b4-8283-a8ea1c028e4b", 00:34:24.077 "is_configured": true, 00:34:24.077 "data_offset": 0, 00:34:24.077 "data_size": 65536 00:34:24.077 }, 00:34:24.077 { 00:34:24.077 "name": "BaseBdev2", 00:34:24.077 "uuid": "1f28579d-e141-5f15-8bff-0d207ea4db26", 00:34:24.077 "is_configured": true, 00:34:24.077 "data_offset": 0, 00:34:24.077 "data_size": 65536 00:34:24.077 }, 00:34:24.077 { 00:34:24.077 "name": "BaseBdev3", 00:34:24.077 "uuid": "761c7305-2ef1-527b-894e-550a766b318f", 00:34:24.077 "is_configured": true, 00:34:24.077 "data_offset": 0, 00:34:24.077 "data_size": 65536 00:34:24.077 } 00:34:24.077 ] 00:34:24.077 }' 00:34:24.077 18:32:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:34:24.077 18:32:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:34:24.077 18:32:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:34:24.077 18:32:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:34:24.077 18:32:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:34:24.077 18:32:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=3 00:34:24.077 18:32:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:34:24.077 18:32:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=551 00:34:24.077 18:32:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:34:24.077 18:32:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:34:24.077 18:32:54 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:34:24.077 18:32:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:34:24.077 18:32:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:34:24.077 18:32:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:34:24.077 18:32:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:24.077 18:32:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:24.077 18:32:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:24.077 18:32:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:34:24.077 18:32:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:24.077 18:32:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:34:24.077 "name": "raid_bdev1", 00:34:24.077 "uuid": "d5e85e09-def4-414f-b0a5-fde06bae210d", 00:34:24.077 "strip_size_kb": 64, 00:34:24.077 "state": "online", 00:34:24.077 "raid_level": "raid5f", 00:34:24.077 "superblock": false, 00:34:24.077 "num_base_bdevs": 3, 00:34:24.077 "num_base_bdevs_discovered": 3, 00:34:24.077 "num_base_bdevs_operational": 3, 00:34:24.077 "process": { 00:34:24.077 "type": "rebuild", 00:34:24.077 "target": "spare", 00:34:24.077 "progress": { 00:34:24.077 "blocks": 22528, 00:34:24.077 "percent": 17 00:34:24.077 } 00:34:24.077 }, 00:34:24.077 "base_bdevs_list": [ 00:34:24.077 { 00:34:24.077 "name": "spare", 00:34:24.077 "uuid": "0a08ca41-8119-50b4-8283-a8ea1c028e4b", 00:34:24.077 "is_configured": true, 00:34:24.077 "data_offset": 0, 00:34:24.077 "data_size": 65536 00:34:24.077 }, 00:34:24.077 { 00:34:24.077 "name": "BaseBdev2", 00:34:24.077 "uuid": "1f28579d-e141-5f15-8bff-0d207ea4db26", 00:34:24.077 "is_configured": true, 00:34:24.077 
"data_offset": 0, 00:34:24.077 "data_size": 65536 00:34:24.077 }, 00:34:24.077 { 00:34:24.077 "name": "BaseBdev3", 00:34:24.077 "uuid": "761c7305-2ef1-527b-894e-550a766b318f", 00:34:24.077 "is_configured": true, 00:34:24.077 "data_offset": 0, 00:34:24.077 "data_size": 65536 00:34:24.077 } 00:34:24.077 ] 00:34:24.077 }' 00:34:24.077 18:32:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:34:24.077 18:32:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:34:24.077 18:32:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:34:24.077 18:32:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:34:24.077 18:32:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:34:25.453 18:32:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:34:25.453 18:32:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:34:25.453 18:32:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:34:25.453 18:32:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:34:25.453 18:32:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:34:25.453 18:32:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:34:25.453 18:32:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:25.453 18:32:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:25.453 18:32:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:25.453 18:32:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:34:25.453 18:32:56 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:25.453 18:32:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:34:25.453 "name": "raid_bdev1", 00:34:25.453 "uuid": "d5e85e09-def4-414f-b0a5-fde06bae210d", 00:34:25.453 "strip_size_kb": 64, 00:34:25.453 "state": "online", 00:34:25.453 "raid_level": "raid5f", 00:34:25.453 "superblock": false, 00:34:25.453 "num_base_bdevs": 3, 00:34:25.453 "num_base_bdevs_discovered": 3, 00:34:25.453 "num_base_bdevs_operational": 3, 00:34:25.453 "process": { 00:34:25.453 "type": "rebuild", 00:34:25.453 "target": "spare", 00:34:25.453 "progress": { 00:34:25.453 "blocks": 45056, 00:34:25.453 "percent": 34 00:34:25.453 } 00:34:25.453 }, 00:34:25.453 "base_bdevs_list": [ 00:34:25.453 { 00:34:25.453 "name": "spare", 00:34:25.453 "uuid": "0a08ca41-8119-50b4-8283-a8ea1c028e4b", 00:34:25.453 "is_configured": true, 00:34:25.453 "data_offset": 0, 00:34:25.453 "data_size": 65536 00:34:25.453 }, 00:34:25.453 { 00:34:25.453 "name": "BaseBdev2", 00:34:25.453 "uuid": "1f28579d-e141-5f15-8bff-0d207ea4db26", 00:34:25.453 "is_configured": true, 00:34:25.453 "data_offset": 0, 00:34:25.453 "data_size": 65536 00:34:25.453 }, 00:34:25.453 { 00:34:25.453 "name": "BaseBdev3", 00:34:25.453 "uuid": "761c7305-2ef1-527b-894e-550a766b318f", 00:34:25.453 "is_configured": true, 00:34:25.453 "data_offset": 0, 00:34:25.453 "data_size": 65536 00:34:25.453 } 00:34:25.453 ] 00:34:25.453 }' 00:34:25.453 18:32:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:34:25.453 18:32:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:34:25.453 18:32:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:34:25.453 18:32:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:34:25.453 18:32:56 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@711 -- # sleep 1 00:34:26.388 18:32:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:34:26.388 18:32:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:34:26.388 18:32:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:34:26.388 18:32:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:34:26.388 18:32:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:34:26.388 18:32:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:34:26.388 18:32:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:26.388 18:32:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:26.388 18:32:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:26.388 18:32:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:34:26.388 18:32:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:26.388 18:32:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:34:26.388 "name": "raid_bdev1", 00:34:26.388 "uuid": "d5e85e09-def4-414f-b0a5-fde06bae210d", 00:34:26.389 "strip_size_kb": 64, 00:34:26.389 "state": "online", 00:34:26.389 "raid_level": "raid5f", 00:34:26.389 "superblock": false, 00:34:26.389 "num_base_bdevs": 3, 00:34:26.389 "num_base_bdevs_discovered": 3, 00:34:26.389 "num_base_bdevs_operational": 3, 00:34:26.389 "process": { 00:34:26.389 "type": "rebuild", 00:34:26.389 "target": "spare", 00:34:26.389 "progress": { 00:34:26.389 "blocks": 69632, 00:34:26.389 "percent": 53 00:34:26.389 } 00:34:26.389 }, 00:34:26.389 "base_bdevs_list": [ 00:34:26.389 { 00:34:26.389 "name": "spare", 00:34:26.389 
"uuid": "0a08ca41-8119-50b4-8283-a8ea1c028e4b", 00:34:26.389 "is_configured": true, 00:34:26.389 "data_offset": 0, 00:34:26.389 "data_size": 65536 00:34:26.389 }, 00:34:26.389 { 00:34:26.389 "name": "BaseBdev2", 00:34:26.389 "uuid": "1f28579d-e141-5f15-8bff-0d207ea4db26", 00:34:26.389 "is_configured": true, 00:34:26.389 "data_offset": 0, 00:34:26.389 "data_size": 65536 00:34:26.389 }, 00:34:26.389 { 00:34:26.389 "name": "BaseBdev3", 00:34:26.389 "uuid": "761c7305-2ef1-527b-894e-550a766b318f", 00:34:26.389 "is_configured": true, 00:34:26.389 "data_offset": 0, 00:34:26.389 "data_size": 65536 00:34:26.389 } 00:34:26.389 ] 00:34:26.389 }' 00:34:26.389 18:32:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:34:26.389 18:32:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:34:26.389 18:32:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:34:26.389 18:32:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:34:26.389 18:32:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:34:27.824 18:32:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:34:27.824 18:32:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:34:27.824 18:32:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:34:27.824 18:32:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:34:27.824 18:32:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:34:27.824 18:32:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:34:27.824 18:32:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:27.824 18:32:58 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:27.824 18:32:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:27.824 18:32:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:34:27.824 18:32:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:27.824 18:32:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:34:27.824 "name": "raid_bdev1", 00:34:27.824 "uuid": "d5e85e09-def4-414f-b0a5-fde06bae210d", 00:34:27.824 "strip_size_kb": 64, 00:34:27.824 "state": "online", 00:34:27.824 "raid_level": "raid5f", 00:34:27.824 "superblock": false, 00:34:27.824 "num_base_bdevs": 3, 00:34:27.824 "num_base_bdevs_discovered": 3, 00:34:27.824 "num_base_bdevs_operational": 3, 00:34:27.824 "process": { 00:34:27.824 "type": "rebuild", 00:34:27.824 "target": "spare", 00:34:27.824 "progress": { 00:34:27.824 "blocks": 92160, 00:34:27.824 "percent": 70 00:34:27.824 } 00:34:27.824 }, 00:34:27.824 "base_bdevs_list": [ 00:34:27.824 { 00:34:27.824 "name": "spare", 00:34:27.824 "uuid": "0a08ca41-8119-50b4-8283-a8ea1c028e4b", 00:34:27.824 "is_configured": true, 00:34:27.824 "data_offset": 0, 00:34:27.824 "data_size": 65536 00:34:27.824 }, 00:34:27.824 { 00:34:27.824 "name": "BaseBdev2", 00:34:27.824 "uuid": "1f28579d-e141-5f15-8bff-0d207ea4db26", 00:34:27.824 "is_configured": true, 00:34:27.824 "data_offset": 0, 00:34:27.824 "data_size": 65536 00:34:27.824 }, 00:34:27.824 { 00:34:27.824 "name": "BaseBdev3", 00:34:27.824 "uuid": "761c7305-2ef1-527b-894e-550a766b318f", 00:34:27.824 "is_configured": true, 00:34:27.824 "data_offset": 0, 00:34:27.824 "data_size": 65536 00:34:27.824 } 00:34:27.824 ] 00:34:27.824 }' 00:34:27.824 18:32:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:34:27.824 18:32:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- 
# [[ rebuild == \r\e\b\u\i\l\d ]] 00:34:27.824 18:32:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:34:27.825 18:32:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:34:27.825 18:32:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:34:28.757 18:32:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:34:28.757 18:32:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:34:28.757 18:32:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:34:28.757 18:32:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:34:28.757 18:32:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:34:28.757 18:32:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:34:28.757 18:32:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:28.757 18:32:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:28.757 18:32:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:28.757 18:32:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:34:28.757 18:32:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:28.757 18:32:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:34:28.757 "name": "raid_bdev1", 00:34:28.757 "uuid": "d5e85e09-def4-414f-b0a5-fde06bae210d", 00:34:28.757 "strip_size_kb": 64, 00:34:28.757 "state": "online", 00:34:28.757 "raid_level": "raid5f", 00:34:28.757 "superblock": false, 00:34:28.757 "num_base_bdevs": 3, 00:34:28.757 "num_base_bdevs_discovered": 3, 00:34:28.757 
"num_base_bdevs_operational": 3, 00:34:28.757 "process": { 00:34:28.757 "type": "rebuild", 00:34:28.757 "target": "spare", 00:34:28.757 "progress": { 00:34:28.757 "blocks": 114688, 00:34:28.757 "percent": 87 00:34:28.757 } 00:34:28.757 }, 00:34:28.757 "base_bdevs_list": [ 00:34:28.757 { 00:34:28.757 "name": "spare", 00:34:28.757 "uuid": "0a08ca41-8119-50b4-8283-a8ea1c028e4b", 00:34:28.757 "is_configured": true, 00:34:28.757 "data_offset": 0, 00:34:28.757 "data_size": 65536 00:34:28.757 }, 00:34:28.757 { 00:34:28.757 "name": "BaseBdev2", 00:34:28.757 "uuid": "1f28579d-e141-5f15-8bff-0d207ea4db26", 00:34:28.757 "is_configured": true, 00:34:28.757 "data_offset": 0, 00:34:28.757 "data_size": 65536 00:34:28.757 }, 00:34:28.757 { 00:34:28.757 "name": "BaseBdev3", 00:34:28.757 "uuid": "761c7305-2ef1-527b-894e-550a766b318f", 00:34:28.757 "is_configured": true, 00:34:28.757 "data_offset": 0, 00:34:28.757 "data_size": 65536 00:34:28.757 } 00:34:28.757 ] 00:34:28.757 }' 00:34:28.757 18:32:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:34:28.757 18:32:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:34:28.757 18:32:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:34:28.757 18:32:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:34:28.757 18:32:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:34:29.323 [2024-12-06 18:33:00.206949] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:34:29.323 [2024-12-06 18:33:00.207354] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:34:29.323 [2024-12-06 18:33:00.207420] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:34:29.888 18:33:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < 
timeout )) 00:34:29.888 18:33:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:34:29.888 18:33:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:34:29.888 18:33:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:34:29.888 18:33:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:34:29.888 18:33:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:34:29.888 18:33:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:29.888 18:33:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:29.888 18:33:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:29.888 18:33:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:34:29.888 18:33:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:29.888 18:33:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:34:29.888 "name": "raid_bdev1", 00:34:29.888 "uuid": "d5e85e09-def4-414f-b0a5-fde06bae210d", 00:34:29.888 "strip_size_kb": 64, 00:34:29.888 "state": "online", 00:34:29.888 "raid_level": "raid5f", 00:34:29.888 "superblock": false, 00:34:29.888 "num_base_bdevs": 3, 00:34:29.888 "num_base_bdevs_discovered": 3, 00:34:29.888 "num_base_bdevs_operational": 3, 00:34:29.888 "base_bdevs_list": [ 00:34:29.888 { 00:34:29.888 "name": "spare", 00:34:29.888 "uuid": "0a08ca41-8119-50b4-8283-a8ea1c028e4b", 00:34:29.888 "is_configured": true, 00:34:29.888 "data_offset": 0, 00:34:29.888 "data_size": 65536 00:34:29.888 }, 00:34:29.888 { 00:34:29.888 "name": "BaseBdev2", 00:34:29.888 "uuid": "1f28579d-e141-5f15-8bff-0d207ea4db26", 00:34:29.888 "is_configured": true, 00:34:29.888 
"data_offset": 0, 00:34:29.888 "data_size": 65536 00:34:29.888 }, 00:34:29.888 { 00:34:29.888 "name": "BaseBdev3", 00:34:29.888 "uuid": "761c7305-2ef1-527b-894e-550a766b318f", 00:34:29.888 "is_configured": true, 00:34:29.888 "data_offset": 0, 00:34:29.888 "data_size": 65536 00:34:29.888 } 00:34:29.888 ] 00:34:29.888 }' 00:34:29.888 18:33:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:34:29.888 18:33:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:34:29.888 18:33:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:34:29.888 18:33:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:34:29.889 18:33:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:34:29.889 18:33:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:34:29.889 18:33:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:34:29.889 18:33:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:34:29.889 18:33:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:34:29.889 18:33:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:34:29.889 18:33:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:29.889 18:33:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:29.889 18:33:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:29.889 18:33:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:34:29.889 18:33:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:29.889 18:33:00 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:34:29.889 "name": "raid_bdev1", 00:34:29.889 "uuid": "d5e85e09-def4-414f-b0a5-fde06bae210d", 00:34:29.889 "strip_size_kb": 64, 00:34:29.889 "state": "online", 00:34:29.889 "raid_level": "raid5f", 00:34:29.889 "superblock": false, 00:34:29.889 "num_base_bdevs": 3, 00:34:29.889 "num_base_bdevs_discovered": 3, 00:34:29.889 "num_base_bdevs_operational": 3, 00:34:29.889 "base_bdevs_list": [ 00:34:29.889 { 00:34:29.889 "name": "spare", 00:34:29.889 "uuid": "0a08ca41-8119-50b4-8283-a8ea1c028e4b", 00:34:29.889 "is_configured": true, 00:34:29.889 "data_offset": 0, 00:34:29.889 "data_size": 65536 00:34:29.889 }, 00:34:29.889 { 00:34:29.889 "name": "BaseBdev2", 00:34:29.889 "uuid": "1f28579d-e141-5f15-8bff-0d207ea4db26", 00:34:29.889 "is_configured": true, 00:34:29.889 "data_offset": 0, 00:34:29.889 "data_size": 65536 00:34:29.889 }, 00:34:29.889 { 00:34:29.889 "name": "BaseBdev3", 00:34:29.889 "uuid": "761c7305-2ef1-527b-894e-550a766b318f", 00:34:29.889 "is_configured": true, 00:34:29.889 "data_offset": 0, 00:34:29.889 "data_size": 65536 00:34:29.889 } 00:34:29.889 ] 00:34:29.889 }' 00:34:29.889 18:33:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:34:29.889 18:33:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:34:29.889 18:33:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:34:30.147 18:33:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:34:30.147 18:33:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:34:30.147 18:33:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:34:30.147 18:33:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:34:30.147 18:33:00 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:34:30.147 18:33:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:34:30.147 18:33:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:34:30.147 18:33:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:34:30.147 18:33:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:34:30.147 18:33:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:34:30.147 18:33:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:34:30.147 18:33:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:30.147 18:33:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:30.147 18:33:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:30.147 18:33:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:34:30.147 18:33:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:30.147 18:33:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:34:30.147 "name": "raid_bdev1", 00:34:30.147 "uuid": "d5e85e09-def4-414f-b0a5-fde06bae210d", 00:34:30.147 "strip_size_kb": 64, 00:34:30.147 "state": "online", 00:34:30.147 "raid_level": "raid5f", 00:34:30.147 "superblock": false, 00:34:30.147 "num_base_bdevs": 3, 00:34:30.147 "num_base_bdevs_discovered": 3, 00:34:30.147 "num_base_bdevs_operational": 3, 00:34:30.147 "base_bdevs_list": [ 00:34:30.147 { 00:34:30.147 "name": "spare", 00:34:30.147 "uuid": "0a08ca41-8119-50b4-8283-a8ea1c028e4b", 00:34:30.147 "is_configured": true, 00:34:30.147 "data_offset": 0, 00:34:30.147 "data_size": 65536 00:34:30.147 }, 00:34:30.147 { 00:34:30.147 
"name": "BaseBdev2", 00:34:30.147 "uuid": "1f28579d-e141-5f15-8bff-0d207ea4db26", 00:34:30.147 "is_configured": true, 00:34:30.147 "data_offset": 0, 00:34:30.147 "data_size": 65536 00:34:30.147 }, 00:34:30.147 { 00:34:30.147 "name": "BaseBdev3", 00:34:30.147 "uuid": "761c7305-2ef1-527b-894e-550a766b318f", 00:34:30.147 "is_configured": true, 00:34:30.147 "data_offset": 0, 00:34:30.147 "data_size": 65536 00:34:30.147 } 00:34:30.147 ] 00:34:30.147 }' 00:34:30.147 18:33:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:34:30.147 18:33:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:34:30.405 18:33:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:34:30.405 18:33:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:30.405 18:33:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:34:30.405 [2024-12-06 18:33:01.242514] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:34:30.405 [2024-12-06 18:33:01.242692] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:34:30.405 [2024-12-06 18:33:01.242907] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:34:30.405 [2024-12-06 18:33:01.243017] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:34:30.405 [2024-12-06 18:33:01.243040] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:34:30.405 18:33:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:30.405 18:33:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:30.405 18:33:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:34:30.405 18:33:01 bdev_raid.raid5f_rebuild_test 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:34:30.405 18:33:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:34:30.405 18:33:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:30.405 18:33:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:34:30.405 18:33:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:34:30.405 18:33:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:34:30.405 18:33:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:34:30.405 18:33:01 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:34:30.405 18:33:01 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:34:30.405 18:33:01 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:34:30.405 18:33:01 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:34:30.405 18:33:01 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:34:30.405 18:33:01 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:34:30.405 18:33:01 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:34:30.405 18:33:01 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:34:30.405 18:33:01 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:34:30.664 /dev/nbd0 00:34:30.664 18:33:01 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:34:30.664 18:33:01 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:34:30.664 18:33:01 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:34:30.664 18:33:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:34:30.664 18:33:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:34:30.664 18:33:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:34:30.664 18:33:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:34:30.664 18:33:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:34:30.664 18:33:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:34:30.664 18:33:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:34:30.664 18:33:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:34:30.664 1+0 records in 00:34:30.664 1+0 records out 00:34:30.664 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000405102 s, 10.1 MB/s 00:34:30.664 18:33:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:34:30.664 18:33:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:34:30.664 18:33:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:34:30.664 18:33:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:34:30.664 18:33:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:34:30.664 18:33:01 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:34:30.664 18:33:01 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:34:30.664 18:33:01 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:34:30.922 /dev/nbd1 00:34:30.922 18:33:01 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:34:30.922 18:33:01 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:34:30.922 18:33:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:34:30.922 18:33:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:34:30.922 18:33:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:34:30.922 18:33:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:34:30.922 18:33:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:34:30.922 18:33:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:34:30.922 18:33:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:34:30.922 18:33:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:34:30.922 18:33:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:34:30.922 1+0 records in 00:34:30.922 1+0 records out 00:34:30.922 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000487238 s, 8.4 MB/s 00:34:30.922 18:33:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:34:30.922 18:33:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:34:30.922 18:33:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:34:30.922 18:33:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:34:30.922 18:33:01 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:34:30.922 18:33:01 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:34:30.922 18:33:01 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:34:30.922 18:33:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:34:31.181 18:33:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:34:31.181 18:33:02 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:34:31.181 18:33:02 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:34:31.181 18:33:02 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:34:31.181 18:33:02 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:34:31.181 18:33:02 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:34:31.181 18:33:02 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:34:31.440 18:33:02 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:34:31.440 18:33:02 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:34:31.440 18:33:02 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:34:31.440 18:33:02 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:34:31.440 18:33:02 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:34:31.440 18:33:02 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:34:31.440 18:33:02 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:34:31.440 18:33:02 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # 
return 0 00:34:31.440 18:33:02 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:34:31.440 18:33:02 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:34:31.698 18:33:02 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:34:31.698 18:33:02 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:34:31.698 18:33:02 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:34:31.698 18:33:02 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:34:31.698 18:33:02 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:34:31.698 18:33:02 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:34:31.698 18:33:02 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:34:31.698 18:33:02 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:34:31.699 18:33:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:34:31.699 18:33:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 81303 00:34:31.699 18:33:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@954 -- # '[' -z 81303 ']' 00:34:31.699 18:33:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@958 -- # kill -0 81303 00:34:31.699 18:33:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@959 -- # uname 00:34:31.699 18:33:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:31.699 18:33:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 81303 00:34:31.699 killing process with pid 81303 00:34:31.699 Received shutdown signal, test time was about 60.000000 seconds 00:34:31.699 00:34:31.699 Latency(us) 00:34:31.699 
[2024-12-06T18:33:02.648Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:31.699 [2024-12-06T18:33:02.648Z] =================================================================================================================== 00:34:31.699 [2024-12-06T18:33:02.648Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:34:31.699 18:33:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:34:31.699 18:33:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:34:31.699 18:33:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 81303' 00:34:31.699 18:33:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@973 -- # kill 81303 00:34:31.699 [2024-12-06 18:33:02.510092] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:34:31.699 18:33:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@978 -- # wait 81303 00:34:32.267 [2024-12-06 18:33:02.949080] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:34:33.646 18:33:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:34:33.646 00:34:33.646 real 0m15.256s 00:34:33.646 user 0m18.255s 00:34:33.646 sys 0m2.411s 00:34:33.646 18:33:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:33.646 18:33:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:34:33.646 ************************************ 00:34:33.646 END TEST raid5f_rebuild_test 00:34:33.646 ************************************ 00:34:33.646 18:33:04 bdev_raid -- bdev/bdev_raid.sh@991 -- # run_test raid5f_rebuild_test_sb raid_rebuild_test raid5f 3 true false true 00:34:33.646 18:33:04 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:34:33.646 18:33:04 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:33.646 18:33:04 bdev_raid -- 
common/autotest_common.sh@10 -- # set +x 00:34:33.646 ************************************ 00:34:33.646 START TEST raid5f_rebuild_test_sb 00:34:33.646 ************************************ 00:34:33.646 18:33:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid5f 3 true false true 00:34:33.646 18:33:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:34:33.646 18:33:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=3 00:34:33.646 18:33:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:34:33.646 18:33:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:34:33.646 18:33:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:34:33.646 18:33:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:34:33.646 18:33:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:34:33.646 18:33:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:34:33.646 18:33:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:34:33.646 18:33:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:34:33.646 18:33:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:34:33.646 18:33:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:34:33.646 18:33:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:34:33.646 18:33:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:34:33.646 18:33:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:34:33.646 18:33:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 
00:34:33.646 18:33:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:34:33.646 18:33:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:34:33.646 18:33:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:34:33.646 18:33:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:34:33.646 18:33:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:34:33.646 18:33:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:34:33.646 18:33:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:34:33.646 18:33:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:34:33.646 18:33:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:34:33.646 18:33:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:34:33.646 18:33:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:34:33.646 18:33:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:34:33.646 18:33:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:34:33.646 18:33:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=81743 00:34:33.646 18:33:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 81743 00:34:33.646 18:33:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:34:33.646 18:33:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@835 -- # '[' -z 81743 ']' 00:34:33.646 18:33:04 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:33.646 18:33:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:33.646 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:33.646 18:33:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:33.646 18:33:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:33.646 18:33:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:33.646 I/O size of 3145728 is greater than zero copy threshold (65536). 00:34:33.646 Zero copy mechanism will not be used. 00:34:33.646 [2024-12-06 18:33:04.374889] Starting SPDK v25.01-pre git sha1 b6a18b192 / DPDK 24.03.0 initialization... 00:34:33.646 [2024-12-06 18:33:04.375043] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81743 ] 00:34:33.646 [2024-12-06 18:33:04.558178] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:33.905 [2024-12-06 18:33:04.698236] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:34.164 [2024-12-06 18:33:04.945561] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:34:34.164 [2024-12-06 18:33:04.945769] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:34:34.422 18:33:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:34.422 18:33:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@868 -- # return 0 00:34:34.422 18:33:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in 
"${base_bdevs[@]}" 00:34:34.422 18:33:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:34:34.422 18:33:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:34.422 18:33:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:34.422 BaseBdev1_malloc 00:34:34.423 18:33:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:34.423 18:33:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:34:34.423 18:33:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:34.423 18:33:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:34.423 [2024-12-06 18:33:05.252568] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:34:34.423 [2024-12-06 18:33:05.252645] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:34:34.423 [2024-12-06 18:33:05.252673] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:34:34.423 [2024-12-06 18:33:05.252689] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:34:34.423 [2024-12-06 18:33:05.255445] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:34:34.423 [2024-12-06 18:33:05.255490] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:34:34.423 BaseBdev1 00:34:34.423 18:33:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:34.423 18:33:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:34:34.423 18:33:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:34:34.423 18:33:05 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:34.423 18:33:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:34.423 BaseBdev2_malloc 00:34:34.423 18:33:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:34.423 18:33:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:34:34.423 18:33:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:34.423 18:33:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:34.423 [2024-12-06 18:33:05.313826] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:34:34.423 [2024-12-06 18:33:05.313894] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:34:34.423 [2024-12-06 18:33:05.313924] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:34:34.423 [2024-12-06 18:33:05.313939] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:34:34.423 [2024-12-06 18:33:05.316654] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:34:34.423 [2024-12-06 18:33:05.316699] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:34:34.423 BaseBdev2 00:34:34.423 18:33:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:34.423 18:33:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:34:34.423 18:33:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:34:34.423 18:33:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:34.423 18:33:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # 
set +x 00:34:34.681 BaseBdev3_malloc 00:34:34.681 18:33:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:34.681 18:33:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:34:34.681 18:33:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:34.681 18:33:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:34.681 [2024-12-06 18:33:05.390204] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:34:34.681 [2024-12-06 18:33:05.390264] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:34:34.681 [2024-12-06 18:33:05.390290] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:34:34.681 [2024-12-06 18:33:05.390306] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:34:34.681 [2024-12-06 18:33:05.393002] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:34:34.681 [2024-12-06 18:33:05.393049] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:34:34.681 BaseBdev3 00:34:34.681 18:33:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:34.681 18:33:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:34:34.681 18:33:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:34.681 18:33:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:34.681 spare_malloc 00:34:34.681 18:33:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:34.681 18:33:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 
100000 -n 100000 00:34:34.681 18:33:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:34.681 18:33:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:34.681 spare_delay 00:34:34.681 18:33:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:34.681 18:33:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:34:34.681 18:33:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:34.681 18:33:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:34.681 [2024-12-06 18:33:05.464446] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:34:34.681 [2024-12-06 18:33:05.464511] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:34:34.681 [2024-12-06 18:33:05.464533] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:34:34.681 [2024-12-06 18:33:05.464548] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:34:34.681 [2024-12-06 18:33:05.467233] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:34:34.681 [2024-12-06 18:33:05.467279] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:34:34.681 spare 00:34:34.681 18:33:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:34.681 18:33:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 00:34:34.681 18:33:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:34.681 18:33:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:34.681 [2024-12-06 18:33:05.476506] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:34:34.681 [2024-12-06 18:33:05.478838] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:34:34.681 [2024-12-06 18:33:05.479038] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:34:34.681 [2024-12-06 18:33:05.479256] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:34:34.681 [2024-12-06 18:33:05.479271] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:34:34.681 [2024-12-06 18:33:05.479540] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:34:34.681 [2024-12-06 18:33:05.485683] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:34:34.681 [2024-12-06 18:33:05.485832] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:34:34.681 [2024-12-06 18:33:05.486201] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:34:34.681 18:33:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:34.681 18:33:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:34:34.681 18:33:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:34:34.681 18:33:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:34:34.681 18:33:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:34:34.681 18:33:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:34:34.681 18:33:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:34:34.681 18:33:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 
-- # local raid_bdev_info 00:34:34.682 18:33:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:34:34.682 18:33:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:34:34.682 18:33:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:34:34.682 18:33:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:34.682 18:33:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:34.682 18:33:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:34.682 18:33:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:34.682 18:33:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:34.682 18:33:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:34:34.682 "name": "raid_bdev1", 00:34:34.682 "uuid": "3e11fb18-aad5-43ff-875f-85cc5fe0e45e", 00:34:34.682 "strip_size_kb": 64, 00:34:34.682 "state": "online", 00:34:34.682 "raid_level": "raid5f", 00:34:34.682 "superblock": true, 00:34:34.682 "num_base_bdevs": 3, 00:34:34.682 "num_base_bdevs_discovered": 3, 00:34:34.682 "num_base_bdevs_operational": 3, 00:34:34.682 "base_bdevs_list": [ 00:34:34.682 { 00:34:34.682 "name": "BaseBdev1", 00:34:34.682 "uuid": "16e857d5-6e5c-5a45-b96d-f30f65137c3c", 00:34:34.682 "is_configured": true, 00:34:34.682 "data_offset": 2048, 00:34:34.682 "data_size": 63488 00:34:34.682 }, 00:34:34.682 { 00:34:34.682 "name": "BaseBdev2", 00:34:34.682 "uuid": "fc1d9a4e-1e11-55ef-ab37-a0812d47ffdc", 00:34:34.682 "is_configured": true, 00:34:34.682 "data_offset": 2048, 00:34:34.682 "data_size": 63488 00:34:34.682 }, 00:34:34.682 { 00:34:34.682 "name": "BaseBdev3", 00:34:34.682 "uuid": "c38721e2-c463-5bc9-abb2-e664102cff07", 00:34:34.682 "is_configured": true, 
00:34:34.682 "data_offset": 2048, 00:34:34.682 "data_size": 63488 00:34:34.682 } 00:34:34.682 ] 00:34:34.682 }' 00:34:34.682 18:33:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:34:34.682 18:33:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:35.248 18:33:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:34:35.248 18:33:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:35.248 18:33:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:34:35.248 18:33:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:35.248 [2024-12-06 18:33:05.913135] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:34:35.248 18:33:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:35.248 18:33:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=126976 00:34:35.248 18:33:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:34:35.248 18:33:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:35.248 18:33:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:35.248 18:33:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:35.248 18:33:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:35.248 18:33:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:34:35.248 18:33:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:34:35.248 18:33:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:34:35.248 18:33:05 
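A side note on the size reported below (`raid_bdev_size=126976`): this is not part of the log, just a sketch of how that number follows from the JSON dump above. Variable names here are illustrative; the values (3 base bdevs, `data_size` 63488 blocks each) come from the dump, and raid5f dedicates one strip per stripe to parity, so one bdev's worth of capacity is lost.

```shell
# Illustrative only -- derive the expected raid5f bdev size from the dump above.
num_base_bdevs=3
base_data_size=63488   # blocks per base bdev ("data_size" in the JSON)
raid_bdev_size=$(( (num_base_bdevs - 1) * base_data_size ))
echo "$raid_bdev_size"   # 126976, matching raid_bdev_size=126976 in the log
```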
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:34:35.248 18:33:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:34:35.248 18:33:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:34:35.248 18:33:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:34:35.248 18:33:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:34:35.248 18:33:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:34:35.248 18:33:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:34:35.249 18:33:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:34:35.249 18:33:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:34:35.249 18:33:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:34:35.249 18:33:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:34:35.249 [2024-12-06 18:33:06.188582] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:34:35.507 /dev/nbd0 00:34:35.507 18:33:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:34:35.507 18:33:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:34:35.507 18:33:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:34:35.507 18:33:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:34:35.507 18:33:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:34:35.507 18:33:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 
-- # (( i <= 20 )) 00:34:35.508 18:33:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:34:35.508 18:33:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:34:35.508 18:33:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:34:35.508 18:33:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:34:35.508 18:33:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:34:35.508 1+0 records in 00:34:35.508 1+0 records out 00:34:35.508 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000382532 s, 10.7 MB/s 00:34:35.508 18:33:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:34:35.508 18:33:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:34:35.508 18:33:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:34:35.508 18:33:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:34:35.508 18:33:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:34:35.508 18:33:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:34:35.508 18:33:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:34:35.508 18:33:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:34:35.508 18:33:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@630 -- # write_unit_size=256 00:34:35.508 18:33:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@631 -- # echo 128 00:34:35.508 18:33:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom 
of=/dev/nbd0 bs=131072 count=496 oflag=direct 00:34:35.766 496+0 records in 00:34:35.766 496+0 records out 00:34:35.766 65011712 bytes (65 MB, 62 MiB) copied, 0.381357 s, 170 MB/s 00:34:35.766 18:33:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:34:35.766 18:33:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:34:35.766 18:33:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:34:35.766 18:33:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:34:35.766 18:33:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:34:35.766 18:33:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:34:35.766 18:33:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:34:36.039 18:33:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:34:36.039 [2024-12-06 18:33:06.872772] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:34:36.039 18:33:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:34:36.039 18:33:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:34:36.039 18:33:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:34:36.039 18:33:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:34:36.039 18:33:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:34:36.039 18:33:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:34:36.039 18:33:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:34:36.039 18:33:06 bdev_raid.raid5f_rebuild_test_sb -- 
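The dd parameters above are consistent with full-stripe writes. As a hedged sketch (variable names are assumptions, values from the log): with `strip_size_kb=64` and 3 base bdevs, a full raid5f stripe holds 2 data strips of 64 KiB, which is where `write_unit_size=256` (in 512-byte blocks) and `bs=131072 count=496` come from.

```shell
# Illustrative arithmetic for the full-stripe write sizing seen above.
strip_size_kb=64
data_strips=2                                   # num_base_bdevs - 1
full_stripe_bytes=$(( strip_size_kb * 1024 * data_strips ))
write_unit_size=$(( full_stripe_bytes / 512 ))  # in 512-byte blocks
total_written=$(( full_stripe_bytes * 496 ))    # dd count=496
echo "$full_stripe_bytes $write_unit_size $total_written"
# 131072 256 65011712 -- matching bs=131072, write_unit_size=256,
# and the "65011712 bytes" dd summary in the log
```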
bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:34:36.039 18:33:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:36.039 18:33:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:36.039 [2024-12-06 18:33:06.888390] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:34:36.039 18:33:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:36.039 18:33:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:34:36.039 18:33:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:34:36.039 18:33:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:34:36.039 18:33:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:34:36.039 18:33:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:34:36.039 18:33:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:34:36.039 18:33:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:34:36.039 18:33:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:34:36.039 18:33:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:34:36.039 18:33:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:34:36.039 18:33:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:36.039 18:33:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:36.039 18:33:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:36.039 18:33:06 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:36.039 18:33:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:36.039 18:33:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:34:36.039 "name": "raid_bdev1", 00:34:36.039 "uuid": "3e11fb18-aad5-43ff-875f-85cc5fe0e45e", 00:34:36.039 "strip_size_kb": 64, 00:34:36.039 "state": "online", 00:34:36.039 "raid_level": "raid5f", 00:34:36.039 "superblock": true, 00:34:36.039 "num_base_bdevs": 3, 00:34:36.039 "num_base_bdevs_discovered": 2, 00:34:36.039 "num_base_bdevs_operational": 2, 00:34:36.039 "base_bdevs_list": [ 00:34:36.039 { 00:34:36.039 "name": null, 00:34:36.039 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:36.039 "is_configured": false, 00:34:36.039 "data_offset": 0, 00:34:36.039 "data_size": 63488 00:34:36.039 }, 00:34:36.039 { 00:34:36.039 "name": "BaseBdev2", 00:34:36.039 "uuid": "fc1d9a4e-1e11-55ef-ab37-a0812d47ffdc", 00:34:36.039 "is_configured": true, 00:34:36.039 "data_offset": 2048, 00:34:36.039 "data_size": 63488 00:34:36.039 }, 00:34:36.039 { 00:34:36.039 "name": "BaseBdev3", 00:34:36.039 "uuid": "c38721e2-c463-5bc9-abb2-e664102cff07", 00:34:36.039 "is_configured": true, 00:34:36.039 "data_offset": 2048, 00:34:36.039 "data_size": 63488 00:34:36.039 } 00:34:36.039 ] 00:34:36.039 }' 00:34:36.039 18:33:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:34:36.039 18:33:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:36.635 18:33:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:34:36.635 18:33:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:36.635 18:33:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:36.635 [2024-12-06 18:33:07.331889] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:34:36.635 [2024-12-06 18:33:07.350965] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000028f80 00:34:36.635 18:33:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:36.635 18:33:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:34:36.635 [2024-12-06 18:33:07.359301] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:34:37.571 18:33:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:34:37.571 18:33:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:34:37.571 18:33:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:34:37.571 18:33:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:34:37.571 18:33:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:34:37.571 18:33:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:37.571 18:33:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:37.571 18:33:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:37.571 18:33:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:37.571 18:33:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:37.571 18:33:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:34:37.571 "name": "raid_bdev1", 00:34:37.571 "uuid": "3e11fb18-aad5-43ff-875f-85cc5fe0e45e", 00:34:37.571 "strip_size_kb": 64, 00:34:37.571 "state": "online", 00:34:37.571 "raid_level": "raid5f", 00:34:37.571 
"superblock": true, 00:34:37.571 "num_base_bdevs": 3, 00:34:37.571 "num_base_bdevs_discovered": 3, 00:34:37.571 "num_base_bdevs_operational": 3, 00:34:37.571 "process": { 00:34:37.571 "type": "rebuild", 00:34:37.571 "target": "spare", 00:34:37.571 "progress": { 00:34:37.571 "blocks": 20480, 00:34:37.571 "percent": 16 00:34:37.571 } 00:34:37.571 }, 00:34:37.571 "base_bdevs_list": [ 00:34:37.571 { 00:34:37.571 "name": "spare", 00:34:37.571 "uuid": "e3f5c210-2599-517f-b328-ec045f9fbe04", 00:34:37.571 "is_configured": true, 00:34:37.571 "data_offset": 2048, 00:34:37.571 "data_size": 63488 00:34:37.571 }, 00:34:37.571 { 00:34:37.571 "name": "BaseBdev2", 00:34:37.571 "uuid": "fc1d9a4e-1e11-55ef-ab37-a0812d47ffdc", 00:34:37.571 "is_configured": true, 00:34:37.571 "data_offset": 2048, 00:34:37.571 "data_size": 63488 00:34:37.571 }, 00:34:37.571 { 00:34:37.571 "name": "BaseBdev3", 00:34:37.571 "uuid": "c38721e2-c463-5bc9-abb2-e664102cff07", 00:34:37.571 "is_configured": true, 00:34:37.571 "data_offset": 2048, 00:34:37.571 "data_size": 63488 00:34:37.571 } 00:34:37.571 ] 00:34:37.571 }' 00:34:37.571 18:33:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:34:37.571 18:33:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:34:37.571 18:33:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:34:37.571 18:33:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:34:37.571 18:33:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:34:37.571 18:33:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:37.571 18:33:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:37.571 [2024-12-06 18:33:08.503581] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 
00:34:37.830 [2024-12-06 18:33:08.570385] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:34:37.830 [2024-12-06 18:33:08.570607] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:34:37.830 [2024-12-06 18:33:08.570761] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:34:37.830 [2024-12-06 18:33:08.570807] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:34:37.830 18:33:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:37.830 18:33:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:34:37.830 18:33:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:34:37.830 18:33:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:34:37.830 18:33:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:34:37.830 18:33:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:34:37.830 18:33:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:34:37.830 18:33:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:34:37.830 18:33:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:34:37.830 18:33:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:34:37.830 18:33:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:34:37.830 18:33:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:37.830 18:33:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:37.830 
18:33:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:37.830 18:33:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:37.830 18:33:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:37.830 18:33:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:34:37.830 "name": "raid_bdev1", 00:34:37.830 "uuid": "3e11fb18-aad5-43ff-875f-85cc5fe0e45e", 00:34:37.830 "strip_size_kb": 64, 00:34:37.830 "state": "online", 00:34:37.830 "raid_level": "raid5f", 00:34:37.830 "superblock": true, 00:34:37.830 "num_base_bdevs": 3, 00:34:37.830 "num_base_bdevs_discovered": 2, 00:34:37.830 "num_base_bdevs_operational": 2, 00:34:37.830 "base_bdevs_list": [ 00:34:37.830 { 00:34:37.830 "name": null, 00:34:37.830 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:37.830 "is_configured": false, 00:34:37.830 "data_offset": 0, 00:34:37.830 "data_size": 63488 00:34:37.830 }, 00:34:37.830 { 00:34:37.830 "name": "BaseBdev2", 00:34:37.830 "uuid": "fc1d9a4e-1e11-55ef-ab37-a0812d47ffdc", 00:34:37.830 "is_configured": true, 00:34:37.830 "data_offset": 2048, 00:34:37.830 "data_size": 63488 00:34:37.830 }, 00:34:37.830 { 00:34:37.830 "name": "BaseBdev3", 00:34:37.830 "uuid": "c38721e2-c463-5bc9-abb2-e664102cff07", 00:34:37.830 "is_configured": true, 00:34:37.830 "data_offset": 2048, 00:34:37.830 "data_size": 63488 00:34:37.830 } 00:34:37.830 ] 00:34:37.830 }' 00:34:37.830 18:33:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:34:37.830 18:33:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:38.398 18:33:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:34:38.398 18:33:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:34:38.398 18:33:09 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:34:38.398 18:33:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:34:38.398 18:33:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:34:38.398 18:33:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:38.398 18:33:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:38.398 18:33:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:38.398 18:33:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:38.398 18:33:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:38.398 18:33:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:34:38.398 "name": "raid_bdev1", 00:34:38.398 "uuid": "3e11fb18-aad5-43ff-875f-85cc5fe0e45e", 00:34:38.398 "strip_size_kb": 64, 00:34:38.398 "state": "online", 00:34:38.398 "raid_level": "raid5f", 00:34:38.398 "superblock": true, 00:34:38.398 "num_base_bdevs": 3, 00:34:38.398 "num_base_bdevs_discovered": 2, 00:34:38.398 "num_base_bdevs_operational": 2, 00:34:38.398 "base_bdevs_list": [ 00:34:38.398 { 00:34:38.398 "name": null, 00:34:38.398 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:38.398 "is_configured": false, 00:34:38.398 "data_offset": 0, 00:34:38.398 "data_size": 63488 00:34:38.398 }, 00:34:38.398 { 00:34:38.398 "name": "BaseBdev2", 00:34:38.398 "uuid": "fc1d9a4e-1e11-55ef-ab37-a0812d47ffdc", 00:34:38.398 "is_configured": true, 00:34:38.399 "data_offset": 2048, 00:34:38.399 "data_size": 63488 00:34:38.399 }, 00:34:38.399 { 00:34:38.399 "name": "BaseBdev3", 00:34:38.399 "uuid": "c38721e2-c463-5bc9-abb2-e664102cff07", 00:34:38.399 "is_configured": true, 00:34:38.399 "data_offset": 2048, 00:34:38.399 
"data_size": 63488 00:34:38.399 } 00:34:38.399 ] 00:34:38.399 }' 00:34:38.399 18:33:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:34:38.399 18:33:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:34:38.399 18:33:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:34:38.399 18:33:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:34:38.399 18:33:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:34:38.399 18:33:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:38.399 18:33:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:38.399 [2024-12-06 18:33:09.184343] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:34:38.399 [2024-12-06 18:33:09.201403] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000029050 00:34:38.399 18:33:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:38.399 18:33:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:34:38.399 [2024-12-06 18:33:09.209724] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:34:39.334 18:33:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:34:39.334 18:33:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:34:39.334 18:33:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:34:39.334 18:33:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:34:39.334 18:33:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # 
local raid_bdev_info 00:34:39.334 18:33:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:39.334 18:33:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:39.334 18:33:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:39.334 18:33:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:39.334 18:33:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:39.334 18:33:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:34:39.334 "name": "raid_bdev1", 00:34:39.334 "uuid": "3e11fb18-aad5-43ff-875f-85cc5fe0e45e", 00:34:39.334 "strip_size_kb": 64, 00:34:39.334 "state": "online", 00:34:39.334 "raid_level": "raid5f", 00:34:39.334 "superblock": true, 00:34:39.334 "num_base_bdevs": 3, 00:34:39.334 "num_base_bdevs_discovered": 3, 00:34:39.334 "num_base_bdevs_operational": 3, 00:34:39.334 "process": { 00:34:39.334 "type": "rebuild", 00:34:39.334 "target": "spare", 00:34:39.334 "progress": { 00:34:39.334 "blocks": 20480, 00:34:39.334 "percent": 16 00:34:39.334 } 00:34:39.334 }, 00:34:39.334 "base_bdevs_list": [ 00:34:39.334 { 00:34:39.334 "name": "spare", 00:34:39.334 "uuid": "e3f5c210-2599-517f-b328-ec045f9fbe04", 00:34:39.334 "is_configured": true, 00:34:39.334 "data_offset": 2048, 00:34:39.334 "data_size": 63488 00:34:39.334 }, 00:34:39.334 { 00:34:39.334 "name": "BaseBdev2", 00:34:39.334 "uuid": "fc1d9a4e-1e11-55ef-ab37-a0812d47ffdc", 00:34:39.334 "is_configured": true, 00:34:39.334 "data_offset": 2048, 00:34:39.334 "data_size": 63488 00:34:39.334 }, 00:34:39.334 { 00:34:39.334 "name": "BaseBdev3", 00:34:39.334 "uuid": "c38721e2-c463-5bc9-abb2-e664102cff07", 00:34:39.334 "is_configured": true, 00:34:39.334 "data_offset": 2048, 00:34:39.334 "data_size": 63488 00:34:39.334 } 00:34:39.334 ] 00:34:39.334 }' 
00:34:39.334 18:33:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:34:39.592 18:33:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:34:39.592 18:33:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:34:39.592 18:33:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:34:39.592 18:33:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:34:39.592 18:33:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:34:39.592 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:34:39.593 18:33:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=3 00:34:39.593 18:33:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:34:39.593 18:33:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=567 00:34:39.593 18:33:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:34:39.593 18:33:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:34:39.593 18:33:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:34:39.593 18:33:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:34:39.593 18:33:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:34:39.593 18:33:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:34:39.593 18:33:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:39.593 18:33:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:34:39.593 18:33:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:39.593 18:33:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:39.593 18:33:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:39.593 18:33:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:34:39.593 "name": "raid_bdev1", 00:34:39.593 "uuid": "3e11fb18-aad5-43ff-875f-85cc5fe0e45e", 00:34:39.593 "strip_size_kb": 64, 00:34:39.593 "state": "online", 00:34:39.593 "raid_level": "raid5f", 00:34:39.593 "superblock": true, 00:34:39.593 "num_base_bdevs": 3, 00:34:39.593 "num_base_bdevs_discovered": 3, 00:34:39.593 "num_base_bdevs_operational": 3, 00:34:39.593 "process": { 00:34:39.593 "type": "rebuild", 00:34:39.593 "target": "spare", 00:34:39.593 "progress": { 00:34:39.593 "blocks": 22528, 00:34:39.593 "percent": 17 00:34:39.593 } 00:34:39.593 }, 00:34:39.593 "base_bdevs_list": [ 00:34:39.593 { 00:34:39.593 "name": "spare", 00:34:39.593 "uuid": "e3f5c210-2599-517f-b328-ec045f9fbe04", 00:34:39.593 "is_configured": true, 00:34:39.593 "data_offset": 2048, 00:34:39.593 "data_size": 63488 00:34:39.593 }, 00:34:39.593 { 00:34:39.593 "name": "BaseBdev2", 00:34:39.593 "uuid": "fc1d9a4e-1e11-55ef-ab37-a0812d47ffdc", 00:34:39.593 "is_configured": true, 00:34:39.593 "data_offset": 2048, 00:34:39.593 "data_size": 63488 00:34:39.593 }, 00:34:39.593 { 00:34:39.593 "name": "BaseBdev3", 00:34:39.593 "uuid": "c38721e2-c463-5bc9-abb2-e664102cff07", 00:34:39.593 "is_configured": true, 00:34:39.593 "data_offset": 2048, 00:34:39.593 "data_size": 63488 00:34:39.593 } 00:34:39.593 ] 00:34:39.593 }' 00:34:39.593 18:33:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:34:39.593 18:33:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 
00:34:39.593 18:33:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:34:39.593 18:33:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:34:39.593 18:33:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:34:40.969 18:33:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:34:40.969 18:33:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:34:40.969 18:33:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:34:40.969 18:33:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:34:40.969 18:33:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:34:40.969 18:33:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:34:40.969 18:33:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:40.969 18:33:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:40.969 18:33:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:40.969 18:33:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:40.969 18:33:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:40.969 18:33:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:34:40.969 "name": "raid_bdev1", 00:34:40.969 "uuid": "3e11fb18-aad5-43ff-875f-85cc5fe0e45e", 00:34:40.969 "strip_size_kb": 64, 00:34:40.969 "state": "online", 00:34:40.969 "raid_level": "raid5f", 00:34:40.969 "superblock": true, 00:34:40.969 "num_base_bdevs": 3, 00:34:40.969 "num_base_bdevs_discovered": 3, 00:34:40.969 
"num_base_bdevs_operational": 3, 00:34:40.969 "process": { 00:34:40.969 "type": "rebuild", 00:34:40.969 "target": "spare", 00:34:40.969 "progress": { 00:34:40.969 "blocks": 45056, 00:34:40.969 "percent": 35 00:34:40.969 } 00:34:40.969 }, 00:34:40.969 "base_bdevs_list": [ 00:34:40.969 { 00:34:40.969 "name": "spare", 00:34:40.969 "uuid": "e3f5c210-2599-517f-b328-ec045f9fbe04", 00:34:40.969 "is_configured": true, 00:34:40.969 "data_offset": 2048, 00:34:40.969 "data_size": 63488 00:34:40.969 }, 00:34:40.969 { 00:34:40.969 "name": "BaseBdev2", 00:34:40.969 "uuid": "fc1d9a4e-1e11-55ef-ab37-a0812d47ffdc", 00:34:40.969 "is_configured": true, 00:34:40.969 "data_offset": 2048, 00:34:40.969 "data_size": 63488 00:34:40.969 }, 00:34:40.970 { 00:34:40.970 "name": "BaseBdev3", 00:34:40.970 "uuid": "c38721e2-c463-5bc9-abb2-e664102cff07", 00:34:40.970 "is_configured": true, 00:34:40.970 "data_offset": 2048, 00:34:40.970 "data_size": 63488 00:34:40.970 } 00:34:40.970 ] 00:34:40.970 }' 00:34:40.970 18:33:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:34:40.970 18:33:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:34:40.970 18:33:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:34:40.970 18:33:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:34:40.970 18:33:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:34:41.908 18:33:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:34:41.908 18:33:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:34:41.908 18:33:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:34:41.908 18:33:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local 
process_type=rebuild 00:34:41.908 18:33:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:34:41.908 18:33:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:34:41.908 18:33:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:41.908 18:33:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:41.908 18:33:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:41.908 18:33:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:41.908 18:33:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:41.908 18:33:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:34:41.908 "name": "raid_bdev1", 00:34:41.908 "uuid": "3e11fb18-aad5-43ff-875f-85cc5fe0e45e", 00:34:41.908 "strip_size_kb": 64, 00:34:41.908 "state": "online", 00:34:41.908 "raid_level": "raid5f", 00:34:41.908 "superblock": true, 00:34:41.908 "num_base_bdevs": 3, 00:34:41.908 "num_base_bdevs_discovered": 3, 00:34:41.908 "num_base_bdevs_operational": 3, 00:34:41.908 "process": { 00:34:41.908 "type": "rebuild", 00:34:41.908 "target": "spare", 00:34:41.908 "progress": { 00:34:41.908 "blocks": 67584, 00:34:41.908 "percent": 53 00:34:41.908 } 00:34:41.908 }, 00:34:41.908 "base_bdevs_list": [ 00:34:41.908 { 00:34:41.908 "name": "spare", 00:34:41.908 "uuid": "e3f5c210-2599-517f-b328-ec045f9fbe04", 00:34:41.908 "is_configured": true, 00:34:41.908 "data_offset": 2048, 00:34:41.908 "data_size": 63488 00:34:41.908 }, 00:34:41.908 { 00:34:41.908 "name": "BaseBdev2", 00:34:41.908 "uuid": "fc1d9a4e-1e11-55ef-ab37-a0812d47ffdc", 00:34:41.908 "is_configured": true, 00:34:41.908 "data_offset": 2048, 00:34:41.908 "data_size": 63488 00:34:41.908 }, 00:34:41.908 { 00:34:41.908 "name": "BaseBdev3", 
00:34:41.908 "uuid": "c38721e2-c463-5bc9-abb2-e664102cff07", 00:34:41.908 "is_configured": true, 00:34:41.908 "data_offset": 2048, 00:34:41.908 "data_size": 63488 00:34:41.908 } 00:34:41.908 ] 00:34:41.908 }' 00:34:41.908 18:33:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:34:41.908 18:33:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:34:41.908 18:33:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:34:41.908 18:33:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:34:41.908 18:33:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:34:42.845 18:33:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:34:42.845 18:33:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:34:42.845 18:33:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:34:42.845 18:33:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:34:42.845 18:33:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:34:42.845 18:33:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:34:42.845 18:33:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:42.845 18:33:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:42.845 18:33:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:42.845 18:33:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:42.845 18:33:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:34:43.103 18:33:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:34:43.103 "name": "raid_bdev1", 00:34:43.103 "uuid": "3e11fb18-aad5-43ff-875f-85cc5fe0e45e", 00:34:43.103 "strip_size_kb": 64, 00:34:43.103 "state": "online", 00:34:43.103 "raid_level": "raid5f", 00:34:43.103 "superblock": true, 00:34:43.103 "num_base_bdevs": 3, 00:34:43.103 "num_base_bdevs_discovered": 3, 00:34:43.104 "num_base_bdevs_operational": 3, 00:34:43.104 "process": { 00:34:43.104 "type": "rebuild", 00:34:43.104 "target": "spare", 00:34:43.104 "progress": { 00:34:43.104 "blocks": 92160, 00:34:43.104 "percent": 72 00:34:43.104 } 00:34:43.104 }, 00:34:43.104 "base_bdevs_list": [ 00:34:43.104 { 00:34:43.104 "name": "spare", 00:34:43.104 "uuid": "e3f5c210-2599-517f-b328-ec045f9fbe04", 00:34:43.104 "is_configured": true, 00:34:43.104 "data_offset": 2048, 00:34:43.104 "data_size": 63488 00:34:43.104 }, 00:34:43.104 { 00:34:43.104 "name": "BaseBdev2", 00:34:43.104 "uuid": "fc1d9a4e-1e11-55ef-ab37-a0812d47ffdc", 00:34:43.104 "is_configured": true, 00:34:43.104 "data_offset": 2048, 00:34:43.104 "data_size": 63488 00:34:43.104 }, 00:34:43.104 { 00:34:43.104 "name": "BaseBdev3", 00:34:43.104 "uuid": "c38721e2-c463-5bc9-abb2-e664102cff07", 00:34:43.104 "is_configured": true, 00:34:43.104 "data_offset": 2048, 00:34:43.104 "data_size": 63488 00:34:43.104 } 00:34:43.104 ] 00:34:43.104 }' 00:34:43.104 18:33:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:34:43.104 18:33:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:34:43.104 18:33:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:34:43.104 18:33:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:34:43.104 18:33:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:34:44.039 18:33:14 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:34:44.039 18:33:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:34:44.039 18:33:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:34:44.039 18:33:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:34:44.039 18:33:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:34:44.039 18:33:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:34:44.039 18:33:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:44.039 18:33:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:44.039 18:33:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:44.039 18:33:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:44.039 18:33:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:44.039 18:33:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:34:44.039 "name": "raid_bdev1", 00:34:44.039 "uuid": "3e11fb18-aad5-43ff-875f-85cc5fe0e45e", 00:34:44.039 "strip_size_kb": 64, 00:34:44.039 "state": "online", 00:34:44.039 "raid_level": "raid5f", 00:34:44.039 "superblock": true, 00:34:44.039 "num_base_bdevs": 3, 00:34:44.039 "num_base_bdevs_discovered": 3, 00:34:44.039 "num_base_bdevs_operational": 3, 00:34:44.039 "process": { 00:34:44.039 "type": "rebuild", 00:34:44.039 "target": "spare", 00:34:44.039 "progress": { 00:34:44.039 "blocks": 114688, 00:34:44.039 "percent": 90 00:34:44.039 } 00:34:44.039 }, 00:34:44.039 "base_bdevs_list": [ 00:34:44.039 { 00:34:44.039 "name": "spare", 00:34:44.039 "uuid": 
"e3f5c210-2599-517f-b328-ec045f9fbe04", 00:34:44.039 "is_configured": true, 00:34:44.039 "data_offset": 2048, 00:34:44.039 "data_size": 63488 00:34:44.039 }, 00:34:44.039 { 00:34:44.039 "name": "BaseBdev2", 00:34:44.039 "uuid": "fc1d9a4e-1e11-55ef-ab37-a0812d47ffdc", 00:34:44.039 "is_configured": true, 00:34:44.039 "data_offset": 2048, 00:34:44.039 "data_size": 63488 00:34:44.039 }, 00:34:44.039 { 00:34:44.039 "name": "BaseBdev3", 00:34:44.039 "uuid": "c38721e2-c463-5bc9-abb2-e664102cff07", 00:34:44.040 "is_configured": true, 00:34:44.040 "data_offset": 2048, 00:34:44.040 "data_size": 63488 00:34:44.040 } 00:34:44.040 ] 00:34:44.040 }' 00:34:44.040 18:33:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:34:44.301 18:33:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:34:44.301 18:33:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:34:44.301 18:33:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:34:44.301 18:33:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:34:44.561 [2024-12-06 18:33:15.456354] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:34:44.561 [2024-12-06 18:33:15.456435] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:34:44.561 [2024-12-06 18:33:15.456555] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:34:45.132 18:33:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:34:45.132 18:33:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:34:45.132 18:33:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:34:45.132 18:33:16 bdev_raid.raid5f_rebuild_test_sb 
-- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:34:45.132 18:33:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:34:45.132 18:33:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:34:45.132 18:33:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:45.132 18:33:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:45.132 18:33:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:45.132 18:33:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:45.132 18:33:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:45.390 18:33:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:34:45.390 "name": "raid_bdev1", 00:34:45.390 "uuid": "3e11fb18-aad5-43ff-875f-85cc5fe0e45e", 00:34:45.390 "strip_size_kb": 64, 00:34:45.390 "state": "online", 00:34:45.390 "raid_level": "raid5f", 00:34:45.390 "superblock": true, 00:34:45.390 "num_base_bdevs": 3, 00:34:45.390 "num_base_bdevs_discovered": 3, 00:34:45.390 "num_base_bdevs_operational": 3, 00:34:45.390 "base_bdevs_list": [ 00:34:45.390 { 00:34:45.390 "name": "spare", 00:34:45.390 "uuid": "e3f5c210-2599-517f-b328-ec045f9fbe04", 00:34:45.390 "is_configured": true, 00:34:45.390 "data_offset": 2048, 00:34:45.390 "data_size": 63488 00:34:45.390 }, 00:34:45.390 { 00:34:45.390 "name": "BaseBdev2", 00:34:45.390 "uuid": "fc1d9a4e-1e11-55ef-ab37-a0812d47ffdc", 00:34:45.390 "is_configured": true, 00:34:45.390 "data_offset": 2048, 00:34:45.390 "data_size": 63488 00:34:45.390 }, 00:34:45.390 { 00:34:45.390 "name": "BaseBdev3", 00:34:45.390 "uuid": "c38721e2-c463-5bc9-abb2-e664102cff07", 00:34:45.390 "is_configured": true, 00:34:45.390 "data_offset": 2048, 00:34:45.390 "data_size": 63488 00:34:45.390 } 
00:34:45.390 ] 00:34:45.390 }' 00:34:45.390 18:33:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:34:45.390 18:33:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:34:45.390 18:33:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:34:45.390 18:33:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:34:45.390 18:33:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:34:45.390 18:33:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:34:45.390 18:33:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:34:45.390 18:33:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:34:45.390 18:33:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:34:45.390 18:33:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:34:45.390 18:33:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:45.390 18:33:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:45.390 18:33:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:45.390 18:33:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:45.390 18:33:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:45.390 18:33:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:34:45.390 "name": "raid_bdev1", 00:34:45.390 "uuid": "3e11fb18-aad5-43ff-875f-85cc5fe0e45e", 00:34:45.390 "strip_size_kb": 64, 00:34:45.390 "state": "online", 00:34:45.390 "raid_level": 
"raid5f", 00:34:45.390 "superblock": true, 00:34:45.390 "num_base_bdevs": 3, 00:34:45.390 "num_base_bdevs_discovered": 3, 00:34:45.390 "num_base_bdevs_operational": 3, 00:34:45.390 "base_bdevs_list": [ 00:34:45.390 { 00:34:45.390 "name": "spare", 00:34:45.390 "uuid": "e3f5c210-2599-517f-b328-ec045f9fbe04", 00:34:45.390 "is_configured": true, 00:34:45.390 "data_offset": 2048, 00:34:45.390 "data_size": 63488 00:34:45.390 }, 00:34:45.390 { 00:34:45.390 "name": "BaseBdev2", 00:34:45.390 "uuid": "fc1d9a4e-1e11-55ef-ab37-a0812d47ffdc", 00:34:45.390 "is_configured": true, 00:34:45.390 "data_offset": 2048, 00:34:45.390 "data_size": 63488 00:34:45.390 }, 00:34:45.390 { 00:34:45.390 "name": "BaseBdev3", 00:34:45.390 "uuid": "c38721e2-c463-5bc9-abb2-e664102cff07", 00:34:45.390 "is_configured": true, 00:34:45.390 "data_offset": 2048, 00:34:45.390 "data_size": 63488 00:34:45.390 } 00:34:45.390 ] 00:34:45.390 }' 00:34:45.390 18:33:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:34:45.390 18:33:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:34:45.390 18:33:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:34:45.390 18:33:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:34:45.646 18:33:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:34:45.646 18:33:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:34:45.646 18:33:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:34:45.646 18:33:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:34:45.646 18:33:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:34:45.646 18:33:16 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:34:45.646 18:33:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:34:45.646 18:33:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:34:45.646 18:33:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:34:45.646 18:33:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:34:45.646 18:33:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:45.646 18:33:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:45.646 18:33:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:45.646 18:33:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:45.646 18:33:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:45.646 18:33:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:34:45.646 "name": "raid_bdev1", 00:34:45.646 "uuid": "3e11fb18-aad5-43ff-875f-85cc5fe0e45e", 00:34:45.646 "strip_size_kb": 64, 00:34:45.646 "state": "online", 00:34:45.646 "raid_level": "raid5f", 00:34:45.646 "superblock": true, 00:34:45.646 "num_base_bdevs": 3, 00:34:45.646 "num_base_bdevs_discovered": 3, 00:34:45.646 "num_base_bdevs_operational": 3, 00:34:45.646 "base_bdevs_list": [ 00:34:45.646 { 00:34:45.646 "name": "spare", 00:34:45.646 "uuid": "e3f5c210-2599-517f-b328-ec045f9fbe04", 00:34:45.646 "is_configured": true, 00:34:45.646 "data_offset": 2048, 00:34:45.646 "data_size": 63488 00:34:45.646 }, 00:34:45.646 { 00:34:45.646 "name": "BaseBdev2", 00:34:45.646 "uuid": "fc1d9a4e-1e11-55ef-ab37-a0812d47ffdc", 00:34:45.646 "is_configured": true, 00:34:45.646 "data_offset": 2048, 00:34:45.646 
"data_size": 63488 00:34:45.646 }, 00:34:45.646 { 00:34:45.646 "name": "BaseBdev3", 00:34:45.646 "uuid": "c38721e2-c463-5bc9-abb2-e664102cff07", 00:34:45.646 "is_configured": true, 00:34:45.646 "data_offset": 2048, 00:34:45.646 "data_size": 63488 00:34:45.646 } 00:34:45.646 ] 00:34:45.646 }' 00:34:45.646 18:33:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:34:45.646 18:33:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:45.905 18:33:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:34:45.905 18:33:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:45.905 18:33:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:45.905 [2024-12-06 18:33:16.717634] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:34:45.905 [2024-12-06 18:33:16.717784] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:34:45.905 [2024-12-06 18:33:16.717912] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:34:45.905 [2024-12-06 18:33:16.718011] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:34:45.905 [2024-12-06 18:33:16.718032] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:34:45.905 18:33:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:45.905 18:33:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:45.905 18:33:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:45.905 18:33:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:34:45.905 18:33:16 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:34:45.905 18:33:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:45.905 18:33:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:34:45.905 18:33:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:34:45.905 18:33:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:34:45.905 18:33:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:34:45.905 18:33:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:34:45.905 18:33:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:34:45.905 18:33:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:34:45.905 18:33:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:34:45.905 18:33:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:34:45.905 18:33:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:34:45.905 18:33:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:34:45.905 18:33:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:34:45.905 18:33:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:34:46.165 /dev/nbd0 00:34:46.165 18:33:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:34:46.165 18:33:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:34:46.165 18:33:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 
00:34:46.165 18:33:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:34:46.165 18:33:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:34:46.165 18:33:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:34:46.165 18:33:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:34:46.165 18:33:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:34:46.165 18:33:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:34:46.165 18:33:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:34:46.165 18:33:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:34:46.165 1+0 records in 00:34:46.165 1+0 records out 00:34:46.165 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000255342 s, 16.0 MB/s 00:34:46.165 18:33:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:34:46.165 18:33:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:34:46.165 18:33:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:34:46.165 18:33:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:34:46.165 18:33:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:34:46.165 18:33:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:34:46.165 18:33:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:34:46.165 18:33:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:34:46.424 /dev/nbd1 00:34:46.424 18:33:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:34:46.424 18:33:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:34:46.424 18:33:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:34:46.424 18:33:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:34:46.424 18:33:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:34:46.424 18:33:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:34:46.424 18:33:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:34:46.424 18:33:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:34:46.424 18:33:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:34:46.424 18:33:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:34:46.424 18:33:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:34:46.424 1+0 records in 00:34:46.424 1+0 records out 00:34:46.424 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000468673 s, 8.7 MB/s 00:34:46.424 18:33:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:34:46.424 18:33:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:34:46.424 18:33:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:34:46.424 18:33:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # 
'[' 4096 '!=' 0 ']' 00:34:46.424 18:33:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:34:46.424 18:33:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:34:46.424 18:33:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:34:46.424 18:33:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:34:46.682 18:33:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:34:46.682 18:33:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:34:46.682 18:33:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:34:46.682 18:33:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:34:46.682 18:33:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:34:46.682 18:33:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:34:46.682 18:33:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:34:46.941 18:33:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:34:46.941 18:33:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:34:46.941 18:33:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:34:46.941 18:33:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:34:46.941 18:33:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:34:46.941 18:33:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:34:46.941 18:33:17 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/nbd_common.sh@41 -- # break 00:34:46.941 18:33:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:34:46.941 18:33:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:34:46.941 18:33:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:34:47.201 18:33:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:34:47.201 18:33:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:34:47.201 18:33:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:34:47.201 18:33:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:34:47.201 18:33:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:34:47.201 18:33:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:34:47.201 18:33:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:34:47.201 18:33:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:34:47.201 18:33:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:34:47.201 18:33:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:34:47.201 18:33:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:47.201 18:33:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:47.201 18:33:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:47.201 18:33:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:34:47.201 18:33:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:34:47.201 18:33:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:47.201 [2024-12-06 18:33:17.930466] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:34:47.201 [2024-12-06 18:33:17.930542] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:34:47.201 [2024-12-06 18:33:17.930570] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:34:47.201 [2024-12-06 18:33:17.930594] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:34:47.201 [2024-12-06 18:33:17.933570] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:34:47.201 [2024-12-06 18:33:17.933618] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:34:47.201 [2024-12-06 18:33:17.933722] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:34:47.201 [2024-12-06 18:33:17.933790] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:34:47.201 [2024-12-06 18:33:17.933971] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:34:47.201 [2024-12-06 18:33:17.934082] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:34:47.201 spare 00:34:47.201 18:33:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:47.201 18:33:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:34:47.201 18:33:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:47.201 18:33:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:47.201 [2024-12-06 18:33:18.034031] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:34:47.201 [2024-12-06 18:33:18.034062] 
bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:34:47.201 [2024-12-06 18:33:18.034429] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000047700 00:34:47.201 [2024-12-06 18:33:18.040330] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:34:47.201 [2024-12-06 18:33:18.040353] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:34:47.201 [2024-12-06 18:33:18.040554] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:34:47.201 18:33:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:47.201 18:33:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:34:47.201 18:33:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:34:47.201 18:33:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:34:47.201 18:33:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:34:47.201 18:33:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:34:47.201 18:33:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:34:47.201 18:33:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:34:47.201 18:33:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:34:47.201 18:33:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:34:47.201 18:33:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:34:47.201 18:33:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:47.201 18:33:18 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:47.201 18:33:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:47.201 18:33:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:47.201 18:33:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:47.201 18:33:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:34:47.201 "name": "raid_bdev1", 00:34:47.201 "uuid": "3e11fb18-aad5-43ff-875f-85cc5fe0e45e", 00:34:47.201 "strip_size_kb": 64, 00:34:47.201 "state": "online", 00:34:47.201 "raid_level": "raid5f", 00:34:47.201 "superblock": true, 00:34:47.201 "num_base_bdevs": 3, 00:34:47.201 "num_base_bdevs_discovered": 3, 00:34:47.201 "num_base_bdevs_operational": 3, 00:34:47.201 "base_bdevs_list": [ 00:34:47.201 { 00:34:47.201 "name": "spare", 00:34:47.201 "uuid": "e3f5c210-2599-517f-b328-ec045f9fbe04", 00:34:47.201 "is_configured": true, 00:34:47.201 "data_offset": 2048, 00:34:47.201 "data_size": 63488 00:34:47.201 }, 00:34:47.201 { 00:34:47.201 "name": "BaseBdev2", 00:34:47.201 "uuid": "fc1d9a4e-1e11-55ef-ab37-a0812d47ffdc", 00:34:47.201 "is_configured": true, 00:34:47.201 "data_offset": 2048, 00:34:47.201 "data_size": 63488 00:34:47.201 }, 00:34:47.201 { 00:34:47.201 "name": "BaseBdev3", 00:34:47.201 "uuid": "c38721e2-c463-5bc9-abb2-e664102cff07", 00:34:47.201 "is_configured": true, 00:34:47.201 "data_offset": 2048, 00:34:47.201 "data_size": 63488 00:34:47.201 } 00:34:47.201 ] 00:34:47.201 }' 00:34:47.201 18:33:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:34:47.201 18:33:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:47.770 18:33:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:34:47.770 18:33:18 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:34:47.770 18:33:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:34:47.770 18:33:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:34:47.770 18:33:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:34:47.770 18:33:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:47.770 18:33:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:47.770 18:33:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:47.770 18:33:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:47.770 18:33:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:47.770 18:33:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:34:47.770 "name": "raid_bdev1", 00:34:47.770 "uuid": "3e11fb18-aad5-43ff-875f-85cc5fe0e45e", 00:34:47.770 "strip_size_kb": 64, 00:34:47.770 "state": "online", 00:34:47.770 "raid_level": "raid5f", 00:34:47.770 "superblock": true, 00:34:47.770 "num_base_bdevs": 3, 00:34:47.770 "num_base_bdevs_discovered": 3, 00:34:47.770 "num_base_bdevs_operational": 3, 00:34:47.770 "base_bdevs_list": [ 00:34:47.770 { 00:34:47.770 "name": "spare", 00:34:47.770 "uuid": "e3f5c210-2599-517f-b328-ec045f9fbe04", 00:34:47.770 "is_configured": true, 00:34:47.770 "data_offset": 2048, 00:34:47.770 "data_size": 63488 00:34:47.770 }, 00:34:47.770 { 00:34:47.770 "name": "BaseBdev2", 00:34:47.770 "uuid": "fc1d9a4e-1e11-55ef-ab37-a0812d47ffdc", 00:34:47.770 "is_configured": true, 00:34:47.770 "data_offset": 2048, 00:34:47.770 "data_size": 63488 00:34:47.770 }, 00:34:47.770 { 00:34:47.770 "name": "BaseBdev3", 00:34:47.770 "uuid": 
"c38721e2-c463-5bc9-abb2-e664102cff07", 00:34:47.770 "is_configured": true, 00:34:47.770 "data_offset": 2048, 00:34:47.770 "data_size": 63488 00:34:47.770 } 00:34:47.770 ] 00:34:47.770 }' 00:34:47.770 18:33:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:34:47.770 18:33:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:34:47.770 18:33:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:34:47.770 18:33:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:34:47.770 18:33:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:47.770 18:33:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:47.770 18:33:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:47.770 18:33:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:34:47.770 18:33:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:47.770 18:33:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:34:47.770 18:33:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:34:47.770 18:33:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:47.770 18:33:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:47.770 [2024-12-06 18:33:18.643264] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:34:47.770 18:33:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:47.770 18:33:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:34:47.770 
18:33:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:34:47.770 18:33:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:34:47.770 18:33:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:34:47.770 18:33:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:34:47.770 18:33:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:34:47.770 18:33:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:34:47.770 18:33:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:34:47.770 18:33:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:34:47.770 18:33:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:34:47.770 18:33:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:47.770 18:33:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:47.770 18:33:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:47.770 18:33:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:47.770 18:33:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:47.770 18:33:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:34:47.770 "name": "raid_bdev1", 00:34:47.770 "uuid": "3e11fb18-aad5-43ff-875f-85cc5fe0e45e", 00:34:47.770 "strip_size_kb": 64, 00:34:47.770 "state": "online", 00:34:47.770 "raid_level": "raid5f", 00:34:47.770 "superblock": true, 00:34:47.770 "num_base_bdevs": 3, 00:34:47.770 "num_base_bdevs_discovered": 2, 00:34:47.770 "num_base_bdevs_operational": 2, 
00:34:47.770 "base_bdevs_list": [ 00:34:47.770 { 00:34:47.770 "name": null, 00:34:47.770 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:47.770 "is_configured": false, 00:34:47.770 "data_offset": 0, 00:34:47.770 "data_size": 63488 00:34:47.770 }, 00:34:47.770 { 00:34:47.770 "name": "BaseBdev2", 00:34:47.770 "uuid": "fc1d9a4e-1e11-55ef-ab37-a0812d47ffdc", 00:34:47.770 "is_configured": true, 00:34:47.770 "data_offset": 2048, 00:34:47.770 "data_size": 63488 00:34:47.770 }, 00:34:47.770 { 00:34:47.770 "name": "BaseBdev3", 00:34:47.770 "uuid": "c38721e2-c463-5bc9-abb2-e664102cff07", 00:34:47.770 "is_configured": true, 00:34:47.770 "data_offset": 2048, 00:34:47.770 "data_size": 63488 00:34:47.770 } 00:34:47.770 ] 00:34:47.770 }' 00:34:47.770 18:33:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:34:47.770 18:33:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:48.336 18:33:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:34:48.336 18:33:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:48.336 18:33:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:48.336 [2024-12-06 18:33:19.066720] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:34:48.336 [2024-12-06 18:33:19.066924] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:34:48.336 [2024-12-06 18:33:19.066946] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:34:48.336 [2024-12-06 18:33:19.066998] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:34:48.336 [2024-12-06 18:33:19.084043] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000477d0 00:34:48.336 18:33:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:48.336 18:33:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:34:48.336 [2024-12-06 18:33:19.092269] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:34:49.272 18:33:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:34:49.272 18:33:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:34:49.272 18:33:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:34:49.272 18:33:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:34:49.272 18:33:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:34:49.272 18:33:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:49.272 18:33:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:49.272 18:33:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:49.272 18:33:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:49.272 18:33:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:49.272 18:33:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:34:49.272 "name": "raid_bdev1", 00:34:49.272 "uuid": "3e11fb18-aad5-43ff-875f-85cc5fe0e45e", 00:34:49.272 "strip_size_kb": 64, 00:34:49.272 "state": "online", 00:34:49.272 
"raid_level": "raid5f", 00:34:49.272 "superblock": true, 00:34:49.272 "num_base_bdevs": 3, 00:34:49.272 "num_base_bdevs_discovered": 3, 00:34:49.272 "num_base_bdevs_operational": 3, 00:34:49.272 "process": { 00:34:49.272 "type": "rebuild", 00:34:49.272 "target": "spare", 00:34:49.272 "progress": { 00:34:49.272 "blocks": 20480, 00:34:49.272 "percent": 16 00:34:49.272 } 00:34:49.272 }, 00:34:49.272 "base_bdevs_list": [ 00:34:49.272 { 00:34:49.272 "name": "spare", 00:34:49.272 "uuid": "e3f5c210-2599-517f-b328-ec045f9fbe04", 00:34:49.272 "is_configured": true, 00:34:49.272 "data_offset": 2048, 00:34:49.273 "data_size": 63488 00:34:49.273 }, 00:34:49.273 { 00:34:49.273 "name": "BaseBdev2", 00:34:49.273 "uuid": "fc1d9a4e-1e11-55ef-ab37-a0812d47ffdc", 00:34:49.273 "is_configured": true, 00:34:49.273 "data_offset": 2048, 00:34:49.273 "data_size": 63488 00:34:49.273 }, 00:34:49.273 { 00:34:49.273 "name": "BaseBdev3", 00:34:49.273 "uuid": "c38721e2-c463-5bc9-abb2-e664102cff07", 00:34:49.273 "is_configured": true, 00:34:49.273 "data_offset": 2048, 00:34:49.273 "data_size": 63488 00:34:49.273 } 00:34:49.273 ] 00:34:49.273 }' 00:34:49.273 18:33:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:34:49.273 18:33:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:34:49.273 18:33:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:34:49.532 18:33:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:34:49.532 18:33:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:34:49.532 18:33:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:49.532 18:33:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:49.532 [2024-12-06 18:33:20.235640] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:34:49.532 [2024-12-06 18:33:20.302377] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:34:49.532 [2024-12-06 18:33:20.302448] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:34:49.532 [2024-12-06 18:33:20.302468] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:34:49.532 [2024-12-06 18:33:20.302480] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:34:49.532 18:33:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:49.532 18:33:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:34:49.532 18:33:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:34:49.532 18:33:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:34:49.532 18:33:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:34:49.532 18:33:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:34:49.532 18:33:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:34:49.532 18:33:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:34:49.532 18:33:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:34:49.532 18:33:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:34:49.532 18:33:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:34:49.532 18:33:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:49.532 18:33:20 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:49.532 18:33:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:49.532 18:33:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:49.532 18:33:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:49.532 18:33:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:34:49.532 "name": "raid_bdev1", 00:34:49.532 "uuid": "3e11fb18-aad5-43ff-875f-85cc5fe0e45e", 00:34:49.532 "strip_size_kb": 64, 00:34:49.532 "state": "online", 00:34:49.532 "raid_level": "raid5f", 00:34:49.532 "superblock": true, 00:34:49.532 "num_base_bdevs": 3, 00:34:49.532 "num_base_bdevs_discovered": 2, 00:34:49.532 "num_base_bdevs_operational": 2, 00:34:49.532 "base_bdevs_list": [ 00:34:49.532 { 00:34:49.532 "name": null, 00:34:49.532 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:49.532 "is_configured": false, 00:34:49.532 "data_offset": 0, 00:34:49.532 "data_size": 63488 00:34:49.532 }, 00:34:49.532 { 00:34:49.532 "name": "BaseBdev2", 00:34:49.532 "uuid": "fc1d9a4e-1e11-55ef-ab37-a0812d47ffdc", 00:34:49.532 "is_configured": true, 00:34:49.532 "data_offset": 2048, 00:34:49.532 "data_size": 63488 00:34:49.532 }, 00:34:49.532 { 00:34:49.532 "name": "BaseBdev3", 00:34:49.532 "uuid": "c38721e2-c463-5bc9-abb2-e664102cff07", 00:34:49.532 "is_configured": true, 00:34:49.532 "data_offset": 2048, 00:34:49.532 "data_size": 63488 00:34:49.532 } 00:34:49.532 ] 00:34:49.532 }' 00:34:49.532 18:33:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:34:49.532 18:33:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:50.102 18:33:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:34:50.102 18:33:20 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:34:50.102 18:33:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:50.102 [2024-12-06 18:33:20.764511] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:34:50.102 [2024-12-06 18:33:20.764773] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:34:50.102 [2024-12-06 18:33:20.764811] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b780 00:34:50.102 [2024-12-06 18:33:20.764832] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:34:50.102 [2024-12-06 18:33:20.765443] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:34:50.102 [2024-12-06 18:33:20.765469] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:34:50.102 [2024-12-06 18:33:20.765580] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:34:50.102 [2024-12-06 18:33:20.765601] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:34:50.102 [2024-12-06 18:33:20.765615] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:34:50.102 [2024-12-06 18:33:20.765642] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:34:50.102 [2024-12-06 18:33:20.782525] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000478a0 00:34:50.102 spare 00:34:50.102 18:33:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:50.102 18:33:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:34:50.102 [2024-12-06 18:33:20.790702] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:34:51.040 18:33:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:34:51.040 18:33:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:34:51.040 18:33:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:34:51.040 18:33:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:34:51.040 18:33:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:34:51.040 18:33:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:51.040 18:33:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:51.040 18:33:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:51.040 18:33:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:51.040 18:33:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:51.040 18:33:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:34:51.040 "name": "raid_bdev1", 00:34:51.040 "uuid": "3e11fb18-aad5-43ff-875f-85cc5fe0e45e", 00:34:51.040 "strip_size_kb": 64, 00:34:51.040 "state": 
"online", 00:34:51.040 "raid_level": "raid5f", 00:34:51.040 "superblock": true, 00:34:51.040 "num_base_bdevs": 3, 00:34:51.040 "num_base_bdevs_discovered": 3, 00:34:51.040 "num_base_bdevs_operational": 3, 00:34:51.040 "process": { 00:34:51.040 "type": "rebuild", 00:34:51.040 "target": "spare", 00:34:51.040 "progress": { 00:34:51.040 "blocks": 20480, 00:34:51.040 "percent": 16 00:34:51.040 } 00:34:51.040 }, 00:34:51.040 "base_bdevs_list": [ 00:34:51.040 { 00:34:51.040 "name": "spare", 00:34:51.040 "uuid": "e3f5c210-2599-517f-b328-ec045f9fbe04", 00:34:51.040 "is_configured": true, 00:34:51.040 "data_offset": 2048, 00:34:51.040 "data_size": 63488 00:34:51.040 }, 00:34:51.040 { 00:34:51.040 "name": "BaseBdev2", 00:34:51.040 "uuid": "fc1d9a4e-1e11-55ef-ab37-a0812d47ffdc", 00:34:51.040 "is_configured": true, 00:34:51.040 "data_offset": 2048, 00:34:51.040 "data_size": 63488 00:34:51.040 }, 00:34:51.040 { 00:34:51.040 "name": "BaseBdev3", 00:34:51.040 "uuid": "c38721e2-c463-5bc9-abb2-e664102cff07", 00:34:51.040 "is_configured": true, 00:34:51.040 "data_offset": 2048, 00:34:51.040 "data_size": 63488 00:34:51.040 } 00:34:51.040 ] 00:34:51.040 }' 00:34:51.040 18:33:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:34:51.040 18:33:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:34:51.040 18:33:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:34:51.040 18:33:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:34:51.040 18:33:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:34:51.040 18:33:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:51.040 18:33:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:51.040 [2024-12-06 18:33:21.934372] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:34:51.299 [2024-12-06 18:33:22.000349] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:34:51.299 [2024-12-06 18:33:22.000404] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:34:51.299 [2024-12-06 18:33:22.000426] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:34:51.299 [2024-12-06 18:33:22.000436] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:34:51.299 18:33:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:51.299 18:33:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:34:51.299 18:33:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:34:51.299 18:33:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:34:51.299 18:33:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:34:51.299 18:33:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:34:51.299 18:33:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:34:51.299 18:33:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:34:51.299 18:33:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:34:51.300 18:33:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:34:51.300 18:33:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:34:51.300 18:33:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:51.300 18:33:22 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:34:51.300 18:33:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:51.300 18:33:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:51.300 18:33:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:51.300 18:33:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:34:51.300 "name": "raid_bdev1", 00:34:51.300 "uuid": "3e11fb18-aad5-43ff-875f-85cc5fe0e45e", 00:34:51.300 "strip_size_kb": 64, 00:34:51.300 "state": "online", 00:34:51.300 "raid_level": "raid5f", 00:34:51.300 "superblock": true, 00:34:51.300 "num_base_bdevs": 3, 00:34:51.300 "num_base_bdevs_discovered": 2, 00:34:51.300 "num_base_bdevs_operational": 2, 00:34:51.300 "base_bdevs_list": [ 00:34:51.300 { 00:34:51.300 "name": null, 00:34:51.300 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:51.300 "is_configured": false, 00:34:51.300 "data_offset": 0, 00:34:51.300 "data_size": 63488 00:34:51.300 }, 00:34:51.300 { 00:34:51.300 "name": "BaseBdev2", 00:34:51.300 "uuid": "fc1d9a4e-1e11-55ef-ab37-a0812d47ffdc", 00:34:51.300 "is_configured": true, 00:34:51.300 "data_offset": 2048, 00:34:51.300 "data_size": 63488 00:34:51.300 }, 00:34:51.300 { 00:34:51.300 "name": "BaseBdev3", 00:34:51.300 "uuid": "c38721e2-c463-5bc9-abb2-e664102cff07", 00:34:51.300 "is_configured": true, 00:34:51.300 "data_offset": 2048, 00:34:51.300 "data_size": 63488 00:34:51.300 } 00:34:51.300 ] 00:34:51.300 }' 00:34:51.300 18:33:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:34:51.300 18:33:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:51.559 18:33:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:34:51.559 18:33:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # 
local raid_bdev_name=raid_bdev1 00:34:51.559 18:33:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:34:51.559 18:33:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:34:51.559 18:33:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:34:51.559 18:33:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:51.559 18:33:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:51.559 18:33:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:51.559 18:33:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:51.559 18:33:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:51.559 18:33:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:34:51.559 "name": "raid_bdev1", 00:34:51.559 "uuid": "3e11fb18-aad5-43ff-875f-85cc5fe0e45e", 00:34:51.559 "strip_size_kb": 64, 00:34:51.559 "state": "online", 00:34:51.559 "raid_level": "raid5f", 00:34:51.559 "superblock": true, 00:34:51.559 "num_base_bdevs": 3, 00:34:51.559 "num_base_bdevs_discovered": 2, 00:34:51.559 "num_base_bdevs_operational": 2, 00:34:51.559 "base_bdevs_list": [ 00:34:51.559 { 00:34:51.559 "name": null, 00:34:51.559 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:51.559 "is_configured": false, 00:34:51.559 "data_offset": 0, 00:34:51.559 "data_size": 63488 00:34:51.559 }, 00:34:51.559 { 00:34:51.559 "name": "BaseBdev2", 00:34:51.559 "uuid": "fc1d9a4e-1e11-55ef-ab37-a0812d47ffdc", 00:34:51.559 "is_configured": true, 00:34:51.559 "data_offset": 2048, 00:34:51.559 "data_size": 63488 00:34:51.559 }, 00:34:51.559 { 00:34:51.559 "name": "BaseBdev3", 00:34:51.559 "uuid": "c38721e2-c463-5bc9-abb2-e664102cff07", 00:34:51.559 "is_configured": true, 
00:34:51.559 "data_offset": 2048, 00:34:51.559 "data_size": 63488 00:34:51.559 } 00:34:51.559 ] 00:34:51.559 }' 00:34:51.559 18:33:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:34:51.818 18:33:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:34:51.818 18:33:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:34:51.818 18:33:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:34:51.819 18:33:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:34:51.819 18:33:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:51.819 18:33:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:51.819 18:33:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:51.819 18:33:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:34:51.819 18:33:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:51.819 18:33:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:51.819 [2024-12-06 18:33:22.590808] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:34:51.819 [2024-12-06 18:33:22.590870] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:34:51.819 [2024-12-06 18:33:22.590903] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:34:51.819 [2024-12-06 18:33:22.590916] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:34:51.819 [2024-12-06 18:33:22.591502] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:34:51.819 [2024-12-06 
18:33:22.591526] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:34:51.819 [2024-12-06 18:33:22.591619] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:34:51.819 [2024-12-06 18:33:22.591636] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:34:51.819 [2024-12-06 18:33:22.591664] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:34:51.819 [2024-12-06 18:33:22.591678] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:34:51.819 BaseBdev1 00:34:51.819 18:33:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:51.819 18:33:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:34:52.757 18:33:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:34:52.757 18:33:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:34:52.757 18:33:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:34:52.757 18:33:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:34:52.757 18:33:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:34:52.757 18:33:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:34:52.757 18:33:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:34:52.757 18:33:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:34:52.757 18:33:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:34:52.757 18:33:23 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:34:52.757 18:33:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:52.757 18:33:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:52.757 18:33:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:52.757 18:33:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:52.757 18:33:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:52.757 18:33:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:34:52.757 "name": "raid_bdev1", 00:34:52.757 "uuid": "3e11fb18-aad5-43ff-875f-85cc5fe0e45e", 00:34:52.757 "strip_size_kb": 64, 00:34:52.757 "state": "online", 00:34:52.757 "raid_level": "raid5f", 00:34:52.757 "superblock": true, 00:34:52.757 "num_base_bdevs": 3, 00:34:52.757 "num_base_bdevs_discovered": 2, 00:34:52.757 "num_base_bdevs_operational": 2, 00:34:52.757 "base_bdevs_list": [ 00:34:52.757 { 00:34:52.757 "name": null, 00:34:52.757 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:52.757 "is_configured": false, 00:34:52.757 "data_offset": 0, 00:34:52.757 "data_size": 63488 00:34:52.757 }, 00:34:52.757 { 00:34:52.757 "name": "BaseBdev2", 00:34:52.757 "uuid": "fc1d9a4e-1e11-55ef-ab37-a0812d47ffdc", 00:34:52.757 "is_configured": true, 00:34:52.757 "data_offset": 2048, 00:34:52.757 "data_size": 63488 00:34:52.757 }, 00:34:52.757 { 00:34:52.757 "name": "BaseBdev3", 00:34:52.757 "uuid": "c38721e2-c463-5bc9-abb2-e664102cff07", 00:34:52.757 "is_configured": true, 00:34:52.757 "data_offset": 2048, 00:34:52.757 "data_size": 63488 00:34:52.757 } 00:34:52.757 ] 00:34:52.757 }' 00:34:52.757 18:33:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:34:52.757 18:33:23 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:34:53.326 18:33:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:34:53.326 18:33:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:34:53.326 18:33:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:34:53.326 18:33:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:34:53.326 18:33:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:34:53.326 18:33:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:53.326 18:33:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:53.326 18:33:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:53.326 18:33:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:53.326 18:33:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:53.326 18:33:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:34:53.326 "name": "raid_bdev1", 00:34:53.326 "uuid": "3e11fb18-aad5-43ff-875f-85cc5fe0e45e", 00:34:53.326 "strip_size_kb": 64, 00:34:53.326 "state": "online", 00:34:53.326 "raid_level": "raid5f", 00:34:53.326 "superblock": true, 00:34:53.326 "num_base_bdevs": 3, 00:34:53.326 "num_base_bdevs_discovered": 2, 00:34:53.326 "num_base_bdevs_operational": 2, 00:34:53.326 "base_bdevs_list": [ 00:34:53.326 { 00:34:53.326 "name": null, 00:34:53.326 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:53.326 "is_configured": false, 00:34:53.326 "data_offset": 0, 00:34:53.326 "data_size": 63488 00:34:53.326 }, 00:34:53.326 { 00:34:53.326 "name": "BaseBdev2", 00:34:53.326 "uuid": "fc1d9a4e-1e11-55ef-ab37-a0812d47ffdc", 
00:34:53.326 "is_configured": true, 00:34:53.326 "data_offset": 2048, 00:34:53.326 "data_size": 63488 00:34:53.326 }, 00:34:53.326 { 00:34:53.326 "name": "BaseBdev3", 00:34:53.326 "uuid": "c38721e2-c463-5bc9-abb2-e664102cff07", 00:34:53.326 "is_configured": true, 00:34:53.326 "data_offset": 2048, 00:34:53.326 "data_size": 63488 00:34:53.326 } 00:34:53.326 ] 00:34:53.326 }' 00:34:53.326 18:33:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:34:53.326 18:33:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:34:53.326 18:33:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:34:53.326 18:33:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:34:53.326 18:33:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:34:53.326 18:33:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@652 -- # local es=0 00:34:53.326 18:33:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:34:53.326 18:33:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:34:53.326 18:33:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:34:53.326 18:33:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:34:53.326 18:33:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:34:53.326 18:33:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:34:53.326 18:33:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:53.326 18:33:24 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:53.326 [2024-12-06 18:33:24.173366] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:34:53.326 [2024-12-06 18:33:24.173586] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:34:53.326 [2024-12-06 18:33:24.173607] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:34:53.326 request: 00:34:53.326 { 00:34:53.326 "base_bdev": "BaseBdev1", 00:34:53.326 "raid_bdev": "raid_bdev1", 00:34:53.327 "method": "bdev_raid_add_base_bdev", 00:34:53.327 "req_id": 1 00:34:53.327 } 00:34:53.327 Got JSON-RPC error response 00:34:53.327 response: 00:34:53.327 { 00:34:53.327 "code": -22, 00:34:53.327 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:34:53.327 } 00:34:53.327 18:33:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:34:53.327 18:33:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@655 -- # es=1 00:34:53.327 18:33:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:34:53.327 18:33:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:34:53.327 18:33:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:34:53.327 18:33:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:34:54.266 18:33:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:34:54.266 18:33:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:34:54.266 18:33:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:34:54.266 18:33:25 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:34:54.266 18:33:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:34:54.266 18:33:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:34:54.266 18:33:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:34:54.266 18:33:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:34:54.266 18:33:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:34:54.266 18:33:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:34:54.266 18:33:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:54.266 18:33:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:54.266 18:33:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:54.266 18:33:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:54.525 18:33:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:54.525 18:33:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:34:54.525 "name": "raid_bdev1", 00:34:54.525 "uuid": "3e11fb18-aad5-43ff-875f-85cc5fe0e45e", 00:34:54.525 "strip_size_kb": 64, 00:34:54.525 "state": "online", 00:34:54.525 "raid_level": "raid5f", 00:34:54.525 "superblock": true, 00:34:54.525 "num_base_bdevs": 3, 00:34:54.525 "num_base_bdevs_discovered": 2, 00:34:54.525 "num_base_bdevs_operational": 2, 00:34:54.525 "base_bdevs_list": [ 00:34:54.525 { 00:34:54.525 "name": null, 00:34:54.525 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:54.525 "is_configured": false, 00:34:54.525 "data_offset": 0, 00:34:54.525 "data_size": 63488 00:34:54.525 }, 00:34:54.525 { 00:34:54.525 
"name": "BaseBdev2", 00:34:54.525 "uuid": "fc1d9a4e-1e11-55ef-ab37-a0812d47ffdc", 00:34:54.525 "is_configured": true, 00:34:54.525 "data_offset": 2048, 00:34:54.525 "data_size": 63488 00:34:54.525 }, 00:34:54.525 { 00:34:54.525 "name": "BaseBdev3", 00:34:54.525 "uuid": "c38721e2-c463-5bc9-abb2-e664102cff07", 00:34:54.525 "is_configured": true, 00:34:54.525 "data_offset": 2048, 00:34:54.525 "data_size": 63488 00:34:54.525 } 00:34:54.525 ] 00:34:54.525 }' 00:34:54.525 18:33:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:34:54.525 18:33:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:54.784 18:33:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:34:54.784 18:33:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:34:54.784 18:33:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:34:54.784 18:33:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:34:54.784 18:33:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:34:54.784 18:33:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:54.784 18:33:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:34:54.784 18:33:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:54.784 18:33:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:54.784 18:33:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:54.784 18:33:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:34:54.784 "name": "raid_bdev1", 00:34:54.784 "uuid": "3e11fb18-aad5-43ff-875f-85cc5fe0e45e", 00:34:54.784 
"strip_size_kb": 64, 00:34:54.784 "state": "online", 00:34:54.784 "raid_level": "raid5f", 00:34:54.784 "superblock": true, 00:34:54.784 "num_base_bdevs": 3, 00:34:54.784 "num_base_bdevs_discovered": 2, 00:34:54.784 "num_base_bdevs_operational": 2, 00:34:54.784 "base_bdevs_list": [ 00:34:54.784 { 00:34:54.784 "name": null, 00:34:54.784 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:54.784 "is_configured": false, 00:34:54.784 "data_offset": 0, 00:34:54.784 "data_size": 63488 00:34:54.784 }, 00:34:54.784 { 00:34:54.784 "name": "BaseBdev2", 00:34:54.784 "uuid": "fc1d9a4e-1e11-55ef-ab37-a0812d47ffdc", 00:34:54.784 "is_configured": true, 00:34:54.784 "data_offset": 2048, 00:34:54.784 "data_size": 63488 00:34:54.784 }, 00:34:54.784 { 00:34:54.784 "name": "BaseBdev3", 00:34:54.784 "uuid": "c38721e2-c463-5bc9-abb2-e664102cff07", 00:34:54.784 "is_configured": true, 00:34:54.784 "data_offset": 2048, 00:34:54.784 "data_size": 63488 00:34:54.784 } 00:34:54.784 ] 00:34:54.784 }' 00:34:54.784 18:33:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:34:54.784 18:33:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:34:54.784 18:33:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:34:54.784 18:33:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:34:54.784 18:33:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 81743 00:34:54.784 18:33:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@954 -- # '[' -z 81743 ']' 00:34:54.784 18:33:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@958 -- # kill -0 81743 00:34:54.784 18:33:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@959 -- # uname 00:34:54.784 18:33:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:54.784 18:33:25 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 81743 00:34:54.784 18:33:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:34:54.785 18:33:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:34:54.785 killing process with pid 81743 00:34:54.785 18:33:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 81743' 00:34:54.785 18:33:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@973 -- # kill 81743 00:34:54.785 Received shutdown signal, test time was about 60.000000 seconds 00:34:54.785 00:34:54.785 Latency(us) 00:34:54.785 [2024-12-06T18:33:25.734Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:54.785 [2024-12-06T18:33:25.734Z] =================================================================================================================== 00:34:54.785 [2024-12-06T18:33:25.734Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:34:54.785 [2024-12-06 18:33:25.715579] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:34:54.785 18:33:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@978 -- # wait 81743 00:34:54.785 [2024-12-06 18:33:25.715733] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:34:54.785 [2024-12-06 18:33:25.715813] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:34:54.785 [2024-12-06 18:33:25.715829] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:34:55.354 [2024-12-06 18:33:26.127039] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:34:56.735 18:33:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:34:56.735 00:34:56.735 real 0m23.049s 00:34:56.735 user 0m29.028s 
00:34:56.735 sys 0m3.196s 00:34:56.736 18:33:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:56.736 18:33:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:34:56.736 ************************************ 00:34:56.736 END TEST raid5f_rebuild_test_sb 00:34:56.736 ************************************ 00:34:56.736 18:33:27 bdev_raid -- bdev/bdev_raid.sh@985 -- # for n in {3..4} 00:34:56.736 18:33:27 bdev_raid -- bdev/bdev_raid.sh@986 -- # run_test raid5f_state_function_test raid_state_function_test raid5f 4 false 00:34:56.736 18:33:27 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:34:56.736 18:33:27 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:56.736 18:33:27 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:34:56.736 ************************************ 00:34:56.736 START TEST raid5f_state_function_test 00:34:56.736 ************************************ 00:34:56.736 18:33:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid5f 4 false 00:34:56.736 18:33:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:34:56.736 18:33:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:34:56.736 18:33:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:34:56.736 18:33:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:34:56.736 18:33:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:34:56.736 18:33:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:34:56.736 18:33:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:34:56.736 18:33:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 
00:34:56.736 18:33:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:34:56.736 18:33:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:34:56.736 18:33:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:34:56.736 18:33:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:34:56.736 18:33:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:34:56.736 18:33:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:34:56.736 18:33:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:34:56.736 18:33:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:34:56.736 18:33:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:34:56.736 18:33:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:34:56.736 18:33:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:34:56.736 18:33:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:34:56.736 18:33:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:34:56.736 18:33:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:34:56.736 18:33:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:34:56.736 18:33:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:34:56.736 18:33:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:34:56.736 18:33:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@216 -- # 
strip_size=64 00:34:56.736 18:33:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:34:56.736 18:33:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:34:56.736 18:33:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:34:56.736 18:33:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=82492 00:34:56.736 18:33:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:34:56.736 Process raid pid: 82492 00:34:56.736 18:33:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 82492' 00:34:56.736 18:33:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 82492 00:34:56.736 18:33:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 82492 ']' 00:34:56.736 18:33:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:56.736 18:33:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:56.736 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:56.736 18:33:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:56.736 18:33:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:56.736 18:33:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:34:56.736 [2024-12-06 18:33:27.507634] Starting SPDK v25.01-pre git sha1 b6a18b192 / DPDK 24.03.0 initialization... 
00:34:56.736 [2024-12-06 18:33:27.507777] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:56.996 [2024-12-06 18:33:27.692545] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:56.996 [2024-12-06 18:33:27.818477] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:57.256 [2024-12-06 18:33:28.059023] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:34:57.256 [2024-12-06 18:33:28.059075] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:34:57.516 18:33:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:57.516 18:33:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:34:57.516 18:33:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:34:57.516 18:33:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:57.516 18:33:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:34:57.516 [2024-12-06 18:33:28.336012] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:34:57.516 [2024-12-06 18:33:28.336081] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:34:57.516 [2024-12-06 18:33:28.336094] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:34:57.516 [2024-12-06 18:33:28.336107] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:34:57.516 [2024-12-06 18:33:28.336115] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev 
with name: BaseBdev3 00:34:57.516 [2024-12-06 18:33:28.336128] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:34:57.516 [2024-12-06 18:33:28.336136] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:34:57.516 [2024-12-06 18:33:28.336160] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:34:57.516 18:33:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:57.516 18:33:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:34:57.516 18:33:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:34:57.516 18:33:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:34:57.516 18:33:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:34:57.516 18:33:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:34:57.516 18:33:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:34:57.516 18:33:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:34:57.516 18:33:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:34:57.516 18:33:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:34:57.516 18:33:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:34:57.516 18:33:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:57.516 18:33:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:57.516 18:33:28 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:34:57.516 18:33:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:34:57.516 18:33:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:57.516 18:33:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:34:57.516 "name": "Existed_Raid", 00:34:57.516 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:57.516 "strip_size_kb": 64, 00:34:57.516 "state": "configuring", 00:34:57.516 "raid_level": "raid5f", 00:34:57.516 "superblock": false, 00:34:57.516 "num_base_bdevs": 4, 00:34:57.516 "num_base_bdevs_discovered": 0, 00:34:57.516 "num_base_bdevs_operational": 4, 00:34:57.516 "base_bdevs_list": [ 00:34:57.516 { 00:34:57.516 "name": "BaseBdev1", 00:34:57.516 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:57.516 "is_configured": false, 00:34:57.516 "data_offset": 0, 00:34:57.516 "data_size": 0 00:34:57.516 }, 00:34:57.516 { 00:34:57.516 "name": "BaseBdev2", 00:34:57.516 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:57.516 "is_configured": false, 00:34:57.517 "data_offset": 0, 00:34:57.517 "data_size": 0 00:34:57.517 }, 00:34:57.517 { 00:34:57.517 "name": "BaseBdev3", 00:34:57.517 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:57.517 "is_configured": false, 00:34:57.517 "data_offset": 0, 00:34:57.517 "data_size": 0 00:34:57.517 }, 00:34:57.517 { 00:34:57.517 "name": "BaseBdev4", 00:34:57.517 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:57.517 "is_configured": false, 00:34:57.517 "data_offset": 0, 00:34:57.517 "data_size": 0 00:34:57.517 } 00:34:57.517 ] 00:34:57.517 }' 00:34:57.517 18:33:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:34:57.517 18:33:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:34:58.086 18:33:28 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:34:58.086 18:33:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:58.086 18:33:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:34:58.086 [2024-12-06 18:33:28.775342] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:34:58.086 [2024-12-06 18:33:28.775393] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:34:58.086 18:33:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:58.086 18:33:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:34:58.086 18:33:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:58.086 18:33:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:34:58.086 [2024-12-06 18:33:28.787320] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:34:58.086 [2024-12-06 18:33:28.787372] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:34:58.086 [2024-12-06 18:33:28.787383] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:34:58.086 [2024-12-06 18:33:28.787396] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:34:58.086 [2024-12-06 18:33:28.787404] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:34:58.086 [2024-12-06 18:33:28.787417] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:34:58.086 [2024-12-06 18:33:28.787424] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 
00:34:58.086 [2024-12-06 18:33:28.787436] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:34:58.086 18:33:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:58.086 18:33:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:34:58.086 18:33:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:58.086 18:33:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:34:58.086 [2024-12-06 18:33:28.838780] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:34:58.086 BaseBdev1 00:34:58.086 18:33:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:58.086 18:33:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:34:58.086 18:33:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:34:58.086 18:33:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:34:58.086 18:33:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:34:58.086 18:33:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:34:58.086 18:33:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:34:58.086 18:33:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:34:58.086 18:33:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:58.086 18:33:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:34:58.086 18:33:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:58.086 
18:33:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:34:58.086 18:33:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:58.086 18:33:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:34:58.086 [ 00:34:58.086 { 00:34:58.086 "name": "BaseBdev1", 00:34:58.086 "aliases": [ 00:34:58.086 "9e568b52-925a-4872-95a5-c6bc63f7b6f6" 00:34:58.086 ], 00:34:58.086 "product_name": "Malloc disk", 00:34:58.086 "block_size": 512, 00:34:58.086 "num_blocks": 65536, 00:34:58.086 "uuid": "9e568b52-925a-4872-95a5-c6bc63f7b6f6", 00:34:58.086 "assigned_rate_limits": { 00:34:58.086 "rw_ios_per_sec": 0, 00:34:58.086 "rw_mbytes_per_sec": 0, 00:34:58.086 "r_mbytes_per_sec": 0, 00:34:58.086 "w_mbytes_per_sec": 0 00:34:58.086 }, 00:34:58.086 "claimed": true, 00:34:58.086 "claim_type": "exclusive_write", 00:34:58.086 "zoned": false, 00:34:58.086 "supported_io_types": { 00:34:58.086 "read": true, 00:34:58.086 "write": true, 00:34:58.086 "unmap": true, 00:34:58.086 "flush": true, 00:34:58.086 "reset": true, 00:34:58.086 "nvme_admin": false, 00:34:58.086 "nvme_io": false, 00:34:58.086 "nvme_io_md": false, 00:34:58.086 "write_zeroes": true, 00:34:58.086 "zcopy": true, 00:34:58.086 "get_zone_info": false, 00:34:58.086 "zone_management": false, 00:34:58.086 "zone_append": false, 00:34:58.086 "compare": false, 00:34:58.086 "compare_and_write": false, 00:34:58.086 "abort": true, 00:34:58.086 "seek_hole": false, 00:34:58.086 "seek_data": false, 00:34:58.086 "copy": true, 00:34:58.086 "nvme_iov_md": false 00:34:58.086 }, 00:34:58.086 "memory_domains": [ 00:34:58.086 { 00:34:58.086 "dma_device_id": "system", 00:34:58.086 "dma_device_type": 1 00:34:58.086 }, 00:34:58.086 { 00:34:58.086 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:34:58.086 "dma_device_type": 2 00:34:58.086 } 00:34:58.086 ], 00:34:58.086 "driver_specific": {} 00:34:58.086 } 
00:34:58.086 ] 00:34:58.086 18:33:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:58.086 18:33:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:34:58.086 18:33:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:34:58.086 18:33:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:34:58.086 18:33:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:34:58.086 18:33:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:34:58.086 18:33:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:34:58.086 18:33:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:34:58.086 18:33:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:34:58.086 18:33:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:34:58.086 18:33:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:34:58.086 18:33:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:34:58.086 18:33:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:58.086 18:33:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:34:58.086 18:33:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:58.086 18:33:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:34:58.086 18:33:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:34:58.086 18:33:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:34:58.086 "name": "Existed_Raid", 00:34:58.086 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:58.086 "strip_size_kb": 64, 00:34:58.086 "state": "configuring", 00:34:58.086 "raid_level": "raid5f", 00:34:58.086 "superblock": false, 00:34:58.086 "num_base_bdevs": 4, 00:34:58.086 "num_base_bdevs_discovered": 1, 00:34:58.086 "num_base_bdevs_operational": 4, 00:34:58.086 "base_bdevs_list": [ 00:34:58.086 { 00:34:58.086 "name": "BaseBdev1", 00:34:58.086 "uuid": "9e568b52-925a-4872-95a5-c6bc63f7b6f6", 00:34:58.086 "is_configured": true, 00:34:58.086 "data_offset": 0, 00:34:58.086 "data_size": 65536 00:34:58.086 }, 00:34:58.086 { 00:34:58.086 "name": "BaseBdev2", 00:34:58.086 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:58.086 "is_configured": false, 00:34:58.086 "data_offset": 0, 00:34:58.086 "data_size": 0 00:34:58.086 }, 00:34:58.086 { 00:34:58.086 "name": "BaseBdev3", 00:34:58.086 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:58.086 "is_configured": false, 00:34:58.086 "data_offset": 0, 00:34:58.086 "data_size": 0 00:34:58.086 }, 00:34:58.086 { 00:34:58.086 "name": "BaseBdev4", 00:34:58.086 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:58.086 "is_configured": false, 00:34:58.086 "data_offset": 0, 00:34:58.086 "data_size": 0 00:34:58.087 } 00:34:58.087 ] 00:34:58.087 }' 00:34:58.087 18:33:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:34:58.087 18:33:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:34:58.346 18:33:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:34:58.346 18:33:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:58.346 18:33:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:34:58.346 
[2024-12-06 18:33:29.294262] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:34:58.346 [2024-12-06 18:33:29.294310] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:34:58.607 18:33:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:58.607 18:33:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:34:58.607 18:33:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:58.607 18:33:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:34:58.607 [2024-12-06 18:33:29.306332] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:34:58.607 [2024-12-06 18:33:29.308698] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:34:58.607 [2024-12-06 18:33:29.308746] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:34:58.607 [2024-12-06 18:33:29.308758] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:34:58.607 [2024-12-06 18:33:29.308772] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:34:58.607 [2024-12-06 18:33:29.308780] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:34:58.607 [2024-12-06 18:33:29.308792] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:34:58.607 18:33:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:58.607 18:33:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:34:58.607 18:33:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( 
i < num_base_bdevs )) 00:34:58.607 18:33:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:34:58.607 18:33:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:34:58.607 18:33:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:34:58.607 18:33:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:34:58.607 18:33:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:34:58.607 18:33:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:34:58.607 18:33:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:34:58.607 18:33:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:34:58.607 18:33:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:34:58.607 18:33:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:34:58.607 18:33:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:34:58.607 18:33:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:58.607 18:33:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:58.607 18:33:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:34:58.607 18:33:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:58.607 18:33:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:34:58.607 "name": "Existed_Raid", 00:34:58.607 "uuid": "00000000-0000-0000-0000-000000000000", 
00:34:58.607 "strip_size_kb": 64, 00:34:58.607 "state": "configuring", 00:34:58.607 "raid_level": "raid5f", 00:34:58.607 "superblock": false, 00:34:58.607 "num_base_bdevs": 4, 00:34:58.607 "num_base_bdevs_discovered": 1, 00:34:58.607 "num_base_bdevs_operational": 4, 00:34:58.607 "base_bdevs_list": [ 00:34:58.607 { 00:34:58.607 "name": "BaseBdev1", 00:34:58.607 "uuid": "9e568b52-925a-4872-95a5-c6bc63f7b6f6", 00:34:58.607 "is_configured": true, 00:34:58.607 "data_offset": 0, 00:34:58.607 "data_size": 65536 00:34:58.607 }, 00:34:58.607 { 00:34:58.607 "name": "BaseBdev2", 00:34:58.607 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:58.607 "is_configured": false, 00:34:58.607 "data_offset": 0, 00:34:58.607 "data_size": 0 00:34:58.607 }, 00:34:58.607 { 00:34:58.607 "name": "BaseBdev3", 00:34:58.607 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:58.607 "is_configured": false, 00:34:58.607 "data_offset": 0, 00:34:58.607 "data_size": 0 00:34:58.607 }, 00:34:58.607 { 00:34:58.607 "name": "BaseBdev4", 00:34:58.607 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:58.607 "is_configured": false, 00:34:58.607 "data_offset": 0, 00:34:58.607 "data_size": 0 00:34:58.607 } 00:34:58.607 ] 00:34:58.607 }' 00:34:58.607 18:33:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:34:58.607 18:33:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:34:58.867 18:33:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:34:58.867 18:33:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:58.867 18:33:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:34:58.867 [2024-12-06 18:33:29.719116] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:34:58.867 BaseBdev2 00:34:58.867 18:33:29 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:58.867 18:33:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:34:58.867 18:33:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:34:58.867 18:33:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:34:58.867 18:33:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:34:58.867 18:33:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:34:58.867 18:33:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:34:58.867 18:33:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:34:58.867 18:33:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:58.867 18:33:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:34:58.867 18:33:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:58.867 18:33:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:34:58.867 18:33:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:58.867 18:33:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:34:58.867 [ 00:34:58.867 { 00:34:58.867 "name": "BaseBdev2", 00:34:58.867 "aliases": [ 00:34:58.867 "ff74431a-ce40-40de-9c9b-df0ee2ebb7a3" 00:34:58.867 ], 00:34:58.867 "product_name": "Malloc disk", 00:34:58.867 "block_size": 512, 00:34:58.867 "num_blocks": 65536, 00:34:58.867 "uuid": "ff74431a-ce40-40de-9c9b-df0ee2ebb7a3", 00:34:58.867 "assigned_rate_limits": { 00:34:58.867 "rw_ios_per_sec": 0, 00:34:58.867 "rw_mbytes_per_sec": 0, 00:34:58.867 
"r_mbytes_per_sec": 0, 00:34:58.867 "w_mbytes_per_sec": 0 00:34:58.867 }, 00:34:58.867 "claimed": true, 00:34:58.867 "claim_type": "exclusive_write", 00:34:58.867 "zoned": false, 00:34:58.867 "supported_io_types": { 00:34:58.867 "read": true, 00:34:58.867 "write": true, 00:34:58.867 "unmap": true, 00:34:58.867 "flush": true, 00:34:58.867 "reset": true, 00:34:58.867 "nvme_admin": false, 00:34:58.867 "nvme_io": false, 00:34:58.867 "nvme_io_md": false, 00:34:58.867 "write_zeroes": true, 00:34:58.867 "zcopy": true, 00:34:58.867 "get_zone_info": false, 00:34:58.867 "zone_management": false, 00:34:58.867 "zone_append": false, 00:34:58.867 "compare": false, 00:34:58.867 "compare_and_write": false, 00:34:58.867 "abort": true, 00:34:58.867 "seek_hole": false, 00:34:58.867 "seek_data": false, 00:34:58.867 "copy": true, 00:34:58.867 "nvme_iov_md": false 00:34:58.867 }, 00:34:58.867 "memory_domains": [ 00:34:58.867 { 00:34:58.867 "dma_device_id": "system", 00:34:58.867 "dma_device_type": 1 00:34:58.867 }, 00:34:58.867 { 00:34:58.867 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:34:58.867 "dma_device_type": 2 00:34:58.867 } 00:34:58.867 ], 00:34:58.867 "driver_specific": {} 00:34:58.867 } 00:34:58.867 ] 00:34:58.867 18:33:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:58.867 18:33:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:34:58.867 18:33:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:34:58.867 18:33:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:34:58.867 18:33:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:34:58.867 18:33:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:34:58.867 18:33:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # 
local expected_state=configuring 00:34:58.867 18:33:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:34:58.867 18:33:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:34:58.867 18:33:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:34:58.867 18:33:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:34:58.867 18:33:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:34:58.867 18:33:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:34:58.867 18:33:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:34:58.867 18:33:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:58.867 18:33:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:58.867 18:33:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:34:58.867 18:33:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:34:58.868 18:33:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:58.868 18:33:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:34:58.868 "name": "Existed_Raid", 00:34:58.868 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:58.868 "strip_size_kb": 64, 00:34:58.868 "state": "configuring", 00:34:58.868 "raid_level": "raid5f", 00:34:58.868 "superblock": false, 00:34:58.868 "num_base_bdevs": 4, 00:34:58.868 "num_base_bdevs_discovered": 2, 00:34:58.868 "num_base_bdevs_operational": 4, 00:34:58.868 "base_bdevs_list": [ 00:34:58.868 { 00:34:58.868 "name": "BaseBdev1", 00:34:58.868 "uuid": 
"9e568b52-925a-4872-95a5-c6bc63f7b6f6", 00:34:58.868 "is_configured": true, 00:34:58.868 "data_offset": 0, 00:34:58.868 "data_size": 65536 00:34:58.868 }, 00:34:58.868 { 00:34:58.868 "name": "BaseBdev2", 00:34:58.868 "uuid": "ff74431a-ce40-40de-9c9b-df0ee2ebb7a3", 00:34:58.868 "is_configured": true, 00:34:58.868 "data_offset": 0, 00:34:58.868 "data_size": 65536 00:34:58.868 }, 00:34:58.868 { 00:34:58.868 "name": "BaseBdev3", 00:34:58.868 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:58.868 "is_configured": false, 00:34:58.868 "data_offset": 0, 00:34:58.868 "data_size": 0 00:34:58.868 }, 00:34:58.868 { 00:34:58.868 "name": "BaseBdev4", 00:34:58.868 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:58.868 "is_configured": false, 00:34:58.868 "data_offset": 0, 00:34:58.868 "data_size": 0 00:34:58.868 } 00:34:58.868 ] 00:34:58.868 }' 00:34:58.868 18:33:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:34:58.868 18:33:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:34:59.437 18:33:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:34:59.437 18:33:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:59.437 18:33:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:34:59.438 [2024-12-06 18:33:30.165459] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:34:59.438 BaseBdev3 00:34:59.438 18:33:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:59.438 18:33:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:34:59.438 18:33:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:34:59.438 18:33:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- 
# local bdev_timeout= 00:34:59.438 18:33:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:34:59.438 18:33:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:34:59.438 18:33:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:34:59.438 18:33:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:34:59.438 18:33:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:59.438 18:33:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:34:59.438 18:33:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:59.438 18:33:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:34:59.438 18:33:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:59.438 18:33:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:34:59.438 [ 00:34:59.438 { 00:34:59.438 "name": "BaseBdev3", 00:34:59.438 "aliases": [ 00:34:59.438 "f25ac798-f637-4a1f-8bae-b9ae6cd3b41f" 00:34:59.438 ], 00:34:59.438 "product_name": "Malloc disk", 00:34:59.438 "block_size": 512, 00:34:59.438 "num_blocks": 65536, 00:34:59.438 "uuid": "f25ac798-f637-4a1f-8bae-b9ae6cd3b41f", 00:34:59.438 "assigned_rate_limits": { 00:34:59.438 "rw_ios_per_sec": 0, 00:34:59.438 "rw_mbytes_per_sec": 0, 00:34:59.438 "r_mbytes_per_sec": 0, 00:34:59.438 "w_mbytes_per_sec": 0 00:34:59.438 }, 00:34:59.438 "claimed": true, 00:34:59.438 "claim_type": "exclusive_write", 00:34:59.438 "zoned": false, 00:34:59.438 "supported_io_types": { 00:34:59.438 "read": true, 00:34:59.438 "write": true, 00:34:59.438 "unmap": true, 00:34:59.438 "flush": true, 00:34:59.438 "reset": true, 00:34:59.438 "nvme_admin": false, 
00:34:59.438 "nvme_io": false, 00:34:59.438 "nvme_io_md": false, 00:34:59.438 "write_zeroes": true, 00:34:59.438 "zcopy": true, 00:34:59.438 "get_zone_info": false, 00:34:59.438 "zone_management": false, 00:34:59.438 "zone_append": false, 00:34:59.438 "compare": false, 00:34:59.438 "compare_and_write": false, 00:34:59.438 "abort": true, 00:34:59.438 "seek_hole": false, 00:34:59.438 "seek_data": false, 00:34:59.438 "copy": true, 00:34:59.438 "nvme_iov_md": false 00:34:59.438 }, 00:34:59.438 "memory_domains": [ 00:34:59.438 { 00:34:59.438 "dma_device_id": "system", 00:34:59.438 "dma_device_type": 1 00:34:59.438 }, 00:34:59.438 { 00:34:59.438 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:34:59.438 "dma_device_type": 2 00:34:59.438 } 00:34:59.438 ], 00:34:59.438 "driver_specific": {} 00:34:59.438 } 00:34:59.438 ] 00:34:59.438 18:33:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:59.438 18:33:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:34:59.438 18:33:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:34:59.438 18:33:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:34:59.438 18:33:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:34:59.438 18:33:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:34:59.438 18:33:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:34:59.438 18:33:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:34:59.438 18:33:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:34:59.438 18:33:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 
00:34:59.438 18:33:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:34:59.438 18:33:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:34:59.438 18:33:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:34:59.438 18:33:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:34:59.438 18:33:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:59.438 18:33:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:59.438 18:33:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:34:59.438 18:33:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:34:59.438 18:33:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:59.438 18:33:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:34:59.438 "name": "Existed_Raid", 00:34:59.438 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:59.438 "strip_size_kb": 64, 00:34:59.438 "state": "configuring", 00:34:59.438 "raid_level": "raid5f", 00:34:59.438 "superblock": false, 00:34:59.438 "num_base_bdevs": 4, 00:34:59.438 "num_base_bdevs_discovered": 3, 00:34:59.438 "num_base_bdevs_operational": 4, 00:34:59.438 "base_bdevs_list": [ 00:34:59.438 { 00:34:59.438 "name": "BaseBdev1", 00:34:59.438 "uuid": "9e568b52-925a-4872-95a5-c6bc63f7b6f6", 00:34:59.438 "is_configured": true, 00:34:59.438 "data_offset": 0, 00:34:59.438 "data_size": 65536 00:34:59.438 }, 00:34:59.438 { 00:34:59.438 "name": "BaseBdev2", 00:34:59.438 "uuid": "ff74431a-ce40-40de-9c9b-df0ee2ebb7a3", 00:34:59.438 "is_configured": true, 00:34:59.438 "data_offset": 0, 00:34:59.438 "data_size": 65536 00:34:59.438 }, 00:34:59.438 { 
00:34:59.438 "name": "BaseBdev3", 00:34:59.438 "uuid": "f25ac798-f637-4a1f-8bae-b9ae6cd3b41f", 00:34:59.438 "is_configured": true, 00:34:59.438 "data_offset": 0, 00:34:59.438 "data_size": 65536 00:34:59.438 }, 00:34:59.438 { 00:34:59.438 "name": "BaseBdev4", 00:34:59.438 "uuid": "00000000-0000-0000-0000-000000000000", 00:34:59.438 "is_configured": false, 00:34:59.438 "data_offset": 0, 00:34:59.438 "data_size": 0 00:34:59.438 } 00:34:59.438 ] 00:34:59.438 }' 00:34:59.438 18:33:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:34:59.438 18:33:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:34:59.698 18:33:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:34:59.698 18:33:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:59.698 18:33:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:34:59.958 [2024-12-06 18:33:30.655124] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:34:59.958 [2024-12-06 18:33:30.655221] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:34:59.958 [2024-12-06 18:33:30.655234] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:34:59.958 [2024-12-06 18:33:30.655561] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:34:59.958 [2024-12-06 18:33:30.663130] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:34:59.958 [2024-12-06 18:33:30.663170] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:34:59.958 [2024-12-06 18:33:30.663472] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:34:59.958 BaseBdev4 00:34:59.958 18:33:30 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:59.958 18:33:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:34:59.958 18:33:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:34:59.958 18:33:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:34:59.958 18:33:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:34:59.958 18:33:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:34:59.958 18:33:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:34:59.958 18:33:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:34:59.958 18:33:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:59.958 18:33:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:34:59.958 18:33:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:59.958 18:33:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:34:59.958 18:33:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:59.958 18:33:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:34:59.958 [ 00:34:59.958 { 00:34:59.958 "name": "BaseBdev4", 00:34:59.958 "aliases": [ 00:34:59.958 "e28ce4fa-920c-4ace-a572-a9affe831dce" 00:34:59.958 ], 00:34:59.958 "product_name": "Malloc disk", 00:34:59.958 "block_size": 512, 00:34:59.958 "num_blocks": 65536, 00:34:59.958 "uuid": "e28ce4fa-920c-4ace-a572-a9affe831dce", 00:34:59.958 "assigned_rate_limits": { 00:34:59.958 "rw_ios_per_sec": 0, 00:34:59.958 
"rw_mbytes_per_sec": 0, 00:34:59.958 "r_mbytes_per_sec": 0, 00:34:59.958 "w_mbytes_per_sec": 0 00:34:59.958 }, 00:34:59.958 "claimed": true, 00:34:59.958 "claim_type": "exclusive_write", 00:34:59.958 "zoned": false, 00:34:59.958 "supported_io_types": { 00:34:59.958 "read": true, 00:34:59.958 "write": true, 00:34:59.958 "unmap": true, 00:34:59.958 "flush": true, 00:34:59.958 "reset": true, 00:34:59.958 "nvme_admin": false, 00:34:59.958 "nvme_io": false, 00:34:59.958 "nvme_io_md": false, 00:34:59.958 "write_zeroes": true, 00:34:59.958 "zcopy": true, 00:34:59.958 "get_zone_info": false, 00:34:59.958 "zone_management": false, 00:34:59.958 "zone_append": false, 00:34:59.958 "compare": false, 00:34:59.958 "compare_and_write": false, 00:34:59.958 "abort": true, 00:34:59.958 "seek_hole": false, 00:34:59.958 "seek_data": false, 00:34:59.959 "copy": true, 00:34:59.959 "nvme_iov_md": false 00:34:59.959 }, 00:34:59.959 "memory_domains": [ 00:34:59.959 { 00:34:59.959 "dma_device_id": "system", 00:34:59.959 "dma_device_type": 1 00:34:59.959 }, 00:34:59.959 { 00:34:59.959 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:34:59.959 "dma_device_type": 2 00:34:59.959 } 00:34:59.959 ], 00:34:59.959 "driver_specific": {} 00:34:59.959 } 00:34:59.959 ] 00:34:59.959 18:33:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:59.959 18:33:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:34:59.959 18:33:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:34:59.959 18:33:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:34:59.959 18:33:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:34:59.959 18:33:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:34:59.959 18:33:30 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:34:59.959 18:33:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:34:59.959 18:33:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:34:59.959 18:33:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:34:59.959 18:33:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:34:59.959 18:33:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:34:59.959 18:33:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:34:59.959 18:33:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:34:59.959 18:33:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:34:59.959 18:33:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:34:59.959 18:33:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:59.959 18:33:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:34:59.959 18:33:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:59.959 18:33:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:34:59.959 "name": "Existed_Raid", 00:34:59.959 "uuid": "536ac5d8-8fe2-4f8b-826d-8f16e178ed55", 00:34:59.959 "strip_size_kb": 64, 00:34:59.959 "state": "online", 00:34:59.959 "raid_level": "raid5f", 00:34:59.959 "superblock": false, 00:34:59.959 "num_base_bdevs": 4, 00:34:59.959 "num_base_bdevs_discovered": 4, 00:34:59.959 "num_base_bdevs_operational": 4, 00:34:59.959 "base_bdevs_list": [ 00:34:59.959 { 00:34:59.959 "name": 
"BaseBdev1", 00:34:59.959 "uuid": "9e568b52-925a-4872-95a5-c6bc63f7b6f6", 00:34:59.959 "is_configured": true, 00:34:59.959 "data_offset": 0, 00:34:59.959 "data_size": 65536 00:34:59.959 }, 00:34:59.959 { 00:34:59.959 "name": "BaseBdev2", 00:34:59.959 "uuid": "ff74431a-ce40-40de-9c9b-df0ee2ebb7a3", 00:34:59.959 "is_configured": true, 00:34:59.959 "data_offset": 0, 00:34:59.959 "data_size": 65536 00:34:59.959 }, 00:34:59.959 { 00:34:59.959 "name": "BaseBdev3", 00:34:59.959 "uuid": "f25ac798-f637-4a1f-8bae-b9ae6cd3b41f", 00:34:59.959 "is_configured": true, 00:34:59.959 "data_offset": 0, 00:34:59.959 "data_size": 65536 00:34:59.959 }, 00:34:59.959 { 00:34:59.959 "name": "BaseBdev4", 00:34:59.959 "uuid": "e28ce4fa-920c-4ace-a572-a9affe831dce", 00:34:59.959 "is_configured": true, 00:34:59.959 "data_offset": 0, 00:34:59.959 "data_size": 65536 00:34:59.959 } 00:34:59.959 ] 00:34:59.959 }' 00:34:59.959 18:33:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:34:59.959 18:33:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:35:00.218 18:33:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:35:00.218 18:33:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:35:00.218 18:33:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:35:00.218 18:33:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:35:00.218 18:33:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:35:00.218 18:33:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:35:00.218 18:33:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:35:00.218 18:33:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd 
bdev_get_bdevs -b Existed_Raid 00:35:00.218 18:33:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:00.218 18:33:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:35:00.218 [2024-12-06 18:33:31.100322] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:35:00.218 18:33:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:00.218 18:33:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:35:00.218 "name": "Existed_Raid", 00:35:00.218 "aliases": [ 00:35:00.218 "536ac5d8-8fe2-4f8b-826d-8f16e178ed55" 00:35:00.218 ], 00:35:00.218 "product_name": "Raid Volume", 00:35:00.218 "block_size": 512, 00:35:00.218 "num_blocks": 196608, 00:35:00.218 "uuid": "536ac5d8-8fe2-4f8b-826d-8f16e178ed55", 00:35:00.218 "assigned_rate_limits": { 00:35:00.218 "rw_ios_per_sec": 0, 00:35:00.218 "rw_mbytes_per_sec": 0, 00:35:00.218 "r_mbytes_per_sec": 0, 00:35:00.218 "w_mbytes_per_sec": 0 00:35:00.218 }, 00:35:00.218 "claimed": false, 00:35:00.218 "zoned": false, 00:35:00.218 "supported_io_types": { 00:35:00.218 "read": true, 00:35:00.218 "write": true, 00:35:00.218 "unmap": false, 00:35:00.218 "flush": false, 00:35:00.218 "reset": true, 00:35:00.218 "nvme_admin": false, 00:35:00.218 "nvme_io": false, 00:35:00.218 "nvme_io_md": false, 00:35:00.218 "write_zeroes": true, 00:35:00.218 "zcopy": false, 00:35:00.218 "get_zone_info": false, 00:35:00.218 "zone_management": false, 00:35:00.218 "zone_append": false, 00:35:00.218 "compare": false, 00:35:00.218 "compare_and_write": false, 00:35:00.218 "abort": false, 00:35:00.218 "seek_hole": false, 00:35:00.218 "seek_data": false, 00:35:00.218 "copy": false, 00:35:00.218 "nvme_iov_md": false 00:35:00.218 }, 00:35:00.218 "driver_specific": { 00:35:00.218 "raid": { 00:35:00.218 "uuid": "536ac5d8-8fe2-4f8b-826d-8f16e178ed55", 00:35:00.218 "strip_size_kb": 64, 
00:35:00.218 "state": "online", 00:35:00.218 "raid_level": "raid5f", 00:35:00.218 "superblock": false, 00:35:00.218 "num_base_bdevs": 4, 00:35:00.218 "num_base_bdevs_discovered": 4, 00:35:00.218 "num_base_bdevs_operational": 4, 00:35:00.218 "base_bdevs_list": [ 00:35:00.218 { 00:35:00.218 "name": "BaseBdev1", 00:35:00.218 "uuid": "9e568b52-925a-4872-95a5-c6bc63f7b6f6", 00:35:00.218 "is_configured": true, 00:35:00.218 "data_offset": 0, 00:35:00.218 "data_size": 65536 00:35:00.218 }, 00:35:00.218 { 00:35:00.218 "name": "BaseBdev2", 00:35:00.218 "uuid": "ff74431a-ce40-40de-9c9b-df0ee2ebb7a3", 00:35:00.218 "is_configured": true, 00:35:00.218 "data_offset": 0, 00:35:00.218 "data_size": 65536 00:35:00.218 }, 00:35:00.218 { 00:35:00.218 "name": "BaseBdev3", 00:35:00.218 "uuid": "f25ac798-f637-4a1f-8bae-b9ae6cd3b41f", 00:35:00.218 "is_configured": true, 00:35:00.218 "data_offset": 0, 00:35:00.218 "data_size": 65536 00:35:00.218 }, 00:35:00.218 { 00:35:00.218 "name": "BaseBdev4", 00:35:00.218 "uuid": "e28ce4fa-920c-4ace-a572-a9affe831dce", 00:35:00.218 "is_configured": true, 00:35:00.218 "data_offset": 0, 00:35:00.218 "data_size": 65536 00:35:00.218 } 00:35:00.218 ] 00:35:00.218 } 00:35:00.218 } 00:35:00.218 }' 00:35:00.218 18:33:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:35:00.478 18:33:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:35:00.478 BaseBdev2 00:35:00.478 BaseBdev3 00:35:00.478 BaseBdev4' 00:35:00.478 18:33:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:35:00.478 18:33:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:35:00.478 18:33:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:35:00.478 18:33:31 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:35:00.478 18:33:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:00.478 18:33:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:35:00.478 18:33:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:35:00.478 18:33:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:00.478 18:33:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:35:00.478 18:33:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:35:00.478 18:33:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:35:00.478 18:33:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:35:00.478 18:33:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:35:00.478 18:33:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:00.478 18:33:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:35:00.478 18:33:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:00.478 18:33:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:35:00.479 18:33:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:35:00.479 18:33:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:35:00.479 18:33:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev3 00:35:00.479 18:33:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:35:00.479 18:33:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:00.479 18:33:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:35:00.479 18:33:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:00.479 18:33:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:35:00.479 18:33:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:35:00.479 18:33:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:35:00.479 18:33:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:35:00.479 18:33:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:00.479 18:33:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:35:00.479 18:33:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:35:00.479 18:33:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:00.479 18:33:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:35:00.479 18:33:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:35:00.479 18:33:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:35:00.479 18:33:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:00.479 18:33:31 bdev_raid.raid5f_state_function_test 
-- common/autotest_common.sh@10 -- # set +x 00:35:00.479 [2024-12-06 18:33:31.379720] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:35:00.738 18:33:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:00.738 18:33:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:35:00.738 18:33:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:35:00.738 18:33:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:35:00.738 18:33:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:35:00.738 18:33:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:35:00.738 18:33:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:35:00.738 18:33:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:35:00.738 18:33:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:35:00.738 18:33:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:35:00.738 18:33:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:35:00.738 18:33:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:35:00.738 18:33:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:35:00.738 18:33:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:35:00.738 18:33:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:35:00.738 18:33:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:35:00.738 18:33:31 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:00.738 18:33:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:35:00.738 18:33:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:00.738 18:33:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:35:00.738 18:33:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:00.738 18:33:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:35:00.738 "name": "Existed_Raid", 00:35:00.738 "uuid": "536ac5d8-8fe2-4f8b-826d-8f16e178ed55", 00:35:00.738 "strip_size_kb": 64, 00:35:00.738 "state": "online", 00:35:00.738 "raid_level": "raid5f", 00:35:00.738 "superblock": false, 00:35:00.738 "num_base_bdevs": 4, 00:35:00.738 "num_base_bdevs_discovered": 3, 00:35:00.738 "num_base_bdevs_operational": 3, 00:35:00.738 "base_bdevs_list": [ 00:35:00.738 { 00:35:00.738 "name": null, 00:35:00.738 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:00.738 "is_configured": false, 00:35:00.738 "data_offset": 0, 00:35:00.738 "data_size": 65536 00:35:00.738 }, 00:35:00.738 { 00:35:00.738 "name": "BaseBdev2", 00:35:00.738 "uuid": "ff74431a-ce40-40de-9c9b-df0ee2ebb7a3", 00:35:00.738 "is_configured": true, 00:35:00.738 "data_offset": 0, 00:35:00.738 "data_size": 65536 00:35:00.738 }, 00:35:00.738 { 00:35:00.738 "name": "BaseBdev3", 00:35:00.738 "uuid": "f25ac798-f637-4a1f-8bae-b9ae6cd3b41f", 00:35:00.738 "is_configured": true, 00:35:00.738 "data_offset": 0, 00:35:00.738 "data_size": 65536 00:35:00.738 }, 00:35:00.738 { 00:35:00.738 "name": "BaseBdev4", 00:35:00.738 "uuid": "e28ce4fa-920c-4ace-a572-a9affe831dce", 00:35:00.738 "is_configured": true, 00:35:00.738 "data_offset": 0, 00:35:00.738 "data_size": 65536 00:35:00.738 } 00:35:00.738 ] 00:35:00.738 }' 00:35:00.738 
18:33:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:35:00.738 18:33:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:35:00.998 18:33:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:35:00.998 18:33:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:35:00.998 18:33:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:35:00.998 18:33:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:00.998 18:33:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:00.998 18:33:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:35:00.998 18:33:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:00.998 18:33:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:35:00.998 18:33:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:35:00.998 18:33:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:35:00.998 18:33:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:00.998 18:33:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:35:00.998 [2024-12-06 18:33:31.880450] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:35:00.998 [2024-12-06 18:33:31.880578] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:35:01.257 [2024-12-06 18:33:31.986088] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:35:01.257 18:33:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 
0 == 0 ]] 00:35:01.257 18:33:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:35:01.258 18:33:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:35:01.258 18:33:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:01.258 18:33:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:35:01.258 18:33:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:01.258 18:33:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:35:01.258 18:33:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:01.258 18:33:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:35:01.258 18:33:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:35:01.258 18:33:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:35:01.258 18:33:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:01.258 18:33:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:35:01.258 [2024-12-06 18:33:32.038039] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:35:01.258 18:33:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:01.258 18:33:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:35:01.258 18:33:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:35:01.258 18:33:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:01.258 18:33:32 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:35:01.258 18:33:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:35:01.258 18:33:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:35:01.258 18:33:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:01.258 18:33:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:35:01.258 18:33:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:35:01.258 18:33:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:35:01.258 18:33:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:01.258 18:33:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:35:01.258 [2024-12-06 18:33:32.198343] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:35:01.258 [2024-12-06 18:33:32.198408] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:35:01.518 18:33:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:01.518 18:33:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:35:01.518 18:33:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:35:01.518 18:33:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:01.518 18:33:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:35:01.518 18:33:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:01.518 18:33:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:35:01.518 18:33:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:01.518 18:33:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:35:01.518 18:33:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:35:01.518 18:33:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:35:01.518 18:33:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:35:01.518 18:33:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:35:01.518 18:33:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:35:01.518 18:33:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:01.518 18:33:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:35:01.518 BaseBdev2 00:35:01.518 18:33:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:01.518 18:33:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:35:01.518 18:33:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:35:01.518 18:33:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:35:01.518 18:33:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:35:01.518 18:33:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:35:01.518 18:33:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:35:01.518 18:33:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:35:01.518 18:33:32 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:35:01.518 18:33:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:35:01.518 18:33:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:01.518 18:33:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:35:01.518 18:33:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:01.518 18:33:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:35:01.518 [ 00:35:01.518 { 00:35:01.518 "name": "BaseBdev2", 00:35:01.518 "aliases": [ 00:35:01.518 "18b83546-d484-4337-a8b3-258bb24a6472" 00:35:01.518 ], 00:35:01.518 "product_name": "Malloc disk", 00:35:01.518 "block_size": 512, 00:35:01.518 "num_blocks": 65536, 00:35:01.518 "uuid": "18b83546-d484-4337-a8b3-258bb24a6472", 00:35:01.518 "assigned_rate_limits": { 00:35:01.518 "rw_ios_per_sec": 0, 00:35:01.518 "rw_mbytes_per_sec": 0, 00:35:01.518 "r_mbytes_per_sec": 0, 00:35:01.518 "w_mbytes_per_sec": 0 00:35:01.518 }, 00:35:01.518 "claimed": false, 00:35:01.518 "zoned": false, 00:35:01.518 "supported_io_types": { 00:35:01.518 "read": true, 00:35:01.518 "write": true, 00:35:01.518 "unmap": true, 00:35:01.518 "flush": true, 00:35:01.518 "reset": true, 00:35:01.518 "nvme_admin": false, 00:35:01.518 "nvme_io": false, 00:35:01.518 "nvme_io_md": false, 00:35:01.518 "write_zeroes": true, 00:35:01.518 "zcopy": true, 00:35:01.518 "get_zone_info": false, 00:35:01.518 "zone_management": false, 00:35:01.518 "zone_append": false, 00:35:01.518 "compare": false, 00:35:01.518 "compare_and_write": false, 00:35:01.518 "abort": true, 00:35:01.518 "seek_hole": false, 00:35:01.518 "seek_data": false, 00:35:01.518 "copy": true, 00:35:01.518 "nvme_iov_md": false 00:35:01.518 }, 00:35:01.518 "memory_domains": [ 00:35:01.518 { 00:35:01.518 "dma_device_id": "system", 00:35:01.518 
"dma_device_type": 1 00:35:01.518 }, 00:35:01.518 { 00:35:01.518 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:35:01.518 "dma_device_type": 2 00:35:01.518 } 00:35:01.518 ], 00:35:01.518 "driver_specific": {} 00:35:01.518 } 00:35:01.518 ] 00:35:01.518 18:33:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:01.518 18:33:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:35:01.518 18:33:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:35:01.518 18:33:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:35:01.518 18:33:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:35:01.518 18:33:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:01.518 18:33:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:35:01.779 BaseBdev3 00:35:01.779 18:33:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:01.779 18:33:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:35:01.779 18:33:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:35:01.779 18:33:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:35:01.779 18:33:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:35:01.779 18:33:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:35:01.779 18:33:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:35:01.779 18:33:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:35:01.779 18:33:32 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:01.779 18:33:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:35:01.779 18:33:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:01.779 18:33:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:35:01.779 18:33:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:01.779 18:33:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:35:01.779 [ 00:35:01.779 { 00:35:01.779 "name": "BaseBdev3", 00:35:01.779 "aliases": [ 00:35:01.779 "1a36c142-2604-4e12-9591-49745c1a85f9" 00:35:01.779 ], 00:35:01.779 "product_name": "Malloc disk", 00:35:01.779 "block_size": 512, 00:35:01.779 "num_blocks": 65536, 00:35:01.779 "uuid": "1a36c142-2604-4e12-9591-49745c1a85f9", 00:35:01.779 "assigned_rate_limits": { 00:35:01.779 "rw_ios_per_sec": 0, 00:35:01.779 "rw_mbytes_per_sec": 0, 00:35:01.779 "r_mbytes_per_sec": 0, 00:35:01.779 "w_mbytes_per_sec": 0 00:35:01.779 }, 00:35:01.779 "claimed": false, 00:35:01.779 "zoned": false, 00:35:01.779 "supported_io_types": { 00:35:01.779 "read": true, 00:35:01.779 "write": true, 00:35:01.779 "unmap": true, 00:35:01.779 "flush": true, 00:35:01.779 "reset": true, 00:35:01.779 "nvme_admin": false, 00:35:01.779 "nvme_io": false, 00:35:01.779 "nvme_io_md": false, 00:35:01.779 "write_zeroes": true, 00:35:01.779 "zcopy": true, 00:35:01.779 "get_zone_info": false, 00:35:01.779 "zone_management": false, 00:35:01.779 "zone_append": false, 00:35:01.779 "compare": false, 00:35:01.779 "compare_and_write": false, 00:35:01.779 "abort": true, 00:35:01.779 "seek_hole": false, 00:35:01.779 "seek_data": false, 00:35:01.779 "copy": true, 00:35:01.779 "nvme_iov_md": false 00:35:01.779 }, 00:35:01.779 "memory_domains": [ 00:35:01.779 { 00:35:01.779 
"dma_device_id": "system", 00:35:01.779 "dma_device_type": 1 00:35:01.779 }, 00:35:01.779 { 00:35:01.779 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:35:01.779 "dma_device_type": 2 00:35:01.779 } 00:35:01.779 ], 00:35:01.779 "driver_specific": {} 00:35:01.779 } 00:35:01.779 ] 00:35:01.779 18:33:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:01.779 18:33:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:35:01.779 18:33:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:35:01.779 18:33:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:35:01.779 18:33:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:35:01.779 18:33:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:01.779 18:33:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:35:01.779 BaseBdev4 00:35:01.779 18:33:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:01.779 18:33:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:35:01.779 18:33:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:35:01.779 18:33:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:35:01.780 18:33:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:35:01.780 18:33:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:35:01.780 18:33:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:35:01.780 18:33:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 
00:35:01.780 18:33:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:01.780 18:33:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:35:01.780 18:33:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:01.780 18:33:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:35:01.780 18:33:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:01.780 18:33:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:35:01.780 [ 00:35:01.780 { 00:35:01.780 "name": "BaseBdev4", 00:35:01.780 "aliases": [ 00:35:01.780 "ea641da4-83f7-4fa6-b5c3-f07d4414f2ce" 00:35:01.780 ], 00:35:01.780 "product_name": "Malloc disk", 00:35:01.780 "block_size": 512, 00:35:01.780 "num_blocks": 65536, 00:35:01.780 "uuid": "ea641da4-83f7-4fa6-b5c3-f07d4414f2ce", 00:35:01.780 "assigned_rate_limits": { 00:35:01.780 "rw_ios_per_sec": 0, 00:35:01.780 "rw_mbytes_per_sec": 0, 00:35:01.780 "r_mbytes_per_sec": 0, 00:35:01.780 "w_mbytes_per_sec": 0 00:35:01.780 }, 00:35:01.780 "claimed": false, 00:35:01.780 "zoned": false, 00:35:01.780 "supported_io_types": { 00:35:01.780 "read": true, 00:35:01.780 "write": true, 00:35:01.780 "unmap": true, 00:35:01.780 "flush": true, 00:35:01.780 "reset": true, 00:35:01.780 "nvme_admin": false, 00:35:01.780 "nvme_io": false, 00:35:01.780 "nvme_io_md": false, 00:35:01.780 "write_zeroes": true, 00:35:01.780 "zcopy": true, 00:35:01.780 "get_zone_info": false, 00:35:01.780 "zone_management": false, 00:35:01.780 "zone_append": false, 00:35:01.780 "compare": false, 00:35:01.780 "compare_and_write": false, 00:35:01.780 "abort": true, 00:35:01.780 "seek_hole": false, 00:35:01.780 "seek_data": false, 00:35:01.780 "copy": true, 00:35:01.780 "nvme_iov_md": false 00:35:01.780 }, 00:35:01.780 "memory_domains": [ 
00:35:01.780 { 00:35:01.780 "dma_device_id": "system", 00:35:01.780 "dma_device_type": 1 00:35:01.780 }, 00:35:01.780 { 00:35:01.780 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:35:01.780 "dma_device_type": 2 00:35:01.780 } 00:35:01.780 ], 00:35:01.780 "driver_specific": {} 00:35:01.780 } 00:35:01.780 ] 00:35:01.780 18:33:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:01.780 18:33:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:35:01.780 18:33:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:35:01.780 18:33:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:35:01.780 18:33:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:35:01.780 18:33:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:01.780 18:33:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:35:01.780 [2024-12-06 18:33:32.633358] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:35:01.780 [2024-12-06 18:33:32.633516] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:35:01.780 [2024-12-06 18:33:32.633616] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:35:01.780 [2024-12-06 18:33:32.636066] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:35:01.780 [2024-12-06 18:33:32.636251] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:35:01.780 18:33:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:01.780 18:33:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@291 -- # 
verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:35:01.780 18:33:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:35:01.780 18:33:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:35:01.780 18:33:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:35:01.780 18:33:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:35:01.780 18:33:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:35:01.780 18:33:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:35:01.780 18:33:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:35:01.780 18:33:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:35:01.780 18:33:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:35:01.780 18:33:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:01.780 18:33:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:35:01.780 18:33:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:01.780 18:33:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:35:01.780 18:33:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:01.780 18:33:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:35:01.780 "name": "Existed_Raid", 00:35:01.780 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:01.780 "strip_size_kb": 64, 00:35:01.780 "state": "configuring", 00:35:01.780 "raid_level": "raid5f", 00:35:01.780 
"superblock": false, 00:35:01.780 "num_base_bdevs": 4, 00:35:01.780 "num_base_bdevs_discovered": 3, 00:35:01.780 "num_base_bdevs_operational": 4, 00:35:01.780 "base_bdevs_list": [ 00:35:01.780 { 00:35:01.780 "name": "BaseBdev1", 00:35:01.780 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:01.780 "is_configured": false, 00:35:01.780 "data_offset": 0, 00:35:01.780 "data_size": 0 00:35:01.780 }, 00:35:01.780 { 00:35:01.780 "name": "BaseBdev2", 00:35:01.780 "uuid": "18b83546-d484-4337-a8b3-258bb24a6472", 00:35:01.780 "is_configured": true, 00:35:01.780 "data_offset": 0, 00:35:01.780 "data_size": 65536 00:35:01.780 }, 00:35:01.780 { 00:35:01.780 "name": "BaseBdev3", 00:35:01.780 "uuid": "1a36c142-2604-4e12-9591-49745c1a85f9", 00:35:01.780 "is_configured": true, 00:35:01.780 "data_offset": 0, 00:35:01.780 "data_size": 65536 00:35:01.780 }, 00:35:01.780 { 00:35:01.780 "name": "BaseBdev4", 00:35:01.780 "uuid": "ea641da4-83f7-4fa6-b5c3-f07d4414f2ce", 00:35:01.780 "is_configured": true, 00:35:01.780 "data_offset": 0, 00:35:01.780 "data_size": 65536 00:35:01.780 } 00:35:01.780 ] 00:35:01.780 }' 00:35:01.780 18:33:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:35:01.780 18:33:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:35:02.350 18:33:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:35:02.350 18:33:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:02.350 18:33:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:35:02.350 [2024-12-06 18:33:33.033260] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:35:02.350 18:33:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:02.350 18:33:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state 
Existed_Raid configuring raid5f 64 4 00:35:02.350 18:33:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:35:02.350 18:33:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:35:02.350 18:33:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:35:02.350 18:33:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:35:02.350 18:33:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:35:02.350 18:33:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:35:02.350 18:33:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:35:02.350 18:33:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:35:02.350 18:33:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:35:02.350 18:33:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:02.350 18:33:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:35:02.350 18:33:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:02.350 18:33:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:35:02.350 18:33:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:02.350 18:33:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:35:02.350 "name": "Existed_Raid", 00:35:02.350 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:02.350 "strip_size_kb": 64, 00:35:02.350 "state": "configuring", 00:35:02.350 "raid_level": "raid5f", 00:35:02.350 "superblock": false, 
00:35:02.350 "num_base_bdevs": 4, 00:35:02.350 "num_base_bdevs_discovered": 2, 00:35:02.350 "num_base_bdevs_operational": 4, 00:35:02.350 "base_bdevs_list": [ 00:35:02.350 { 00:35:02.350 "name": "BaseBdev1", 00:35:02.350 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:02.350 "is_configured": false, 00:35:02.350 "data_offset": 0, 00:35:02.350 "data_size": 0 00:35:02.350 }, 00:35:02.350 { 00:35:02.350 "name": null, 00:35:02.350 "uuid": "18b83546-d484-4337-a8b3-258bb24a6472", 00:35:02.350 "is_configured": false, 00:35:02.350 "data_offset": 0, 00:35:02.350 "data_size": 65536 00:35:02.350 }, 00:35:02.350 { 00:35:02.350 "name": "BaseBdev3", 00:35:02.350 "uuid": "1a36c142-2604-4e12-9591-49745c1a85f9", 00:35:02.350 "is_configured": true, 00:35:02.350 "data_offset": 0, 00:35:02.350 "data_size": 65536 00:35:02.350 }, 00:35:02.350 { 00:35:02.350 "name": "BaseBdev4", 00:35:02.350 "uuid": "ea641da4-83f7-4fa6-b5c3-f07d4414f2ce", 00:35:02.350 "is_configured": true, 00:35:02.350 "data_offset": 0, 00:35:02.350 "data_size": 65536 00:35:02.350 } 00:35:02.350 ] 00:35:02.350 }' 00:35:02.350 18:33:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:35:02.350 18:33:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:35:02.610 18:33:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:02.610 18:33:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:02.610 18:33:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:35:02.610 18:33:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:35:02.610 18:33:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:02.610 18:33:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:35:02.610 
18:33:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:35:02.610 18:33:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:02.610 18:33:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:35:02.610 [2024-12-06 18:33:33.518013] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:35:02.610 BaseBdev1 00:35:02.610 18:33:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:02.610 18:33:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:35:02.610 18:33:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:35:02.610 18:33:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:35:02.610 18:33:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:35:02.610 18:33:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:35:02.610 18:33:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:35:02.610 18:33:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:35:02.610 18:33:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:02.610 18:33:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:35:02.610 18:33:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:02.610 18:33:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:35:02.610 18:33:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:02.610 
18:33:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:35:02.610 [ 00:35:02.610 { 00:35:02.610 "name": "BaseBdev1", 00:35:02.610 "aliases": [ 00:35:02.610 "b59d7ef2-e367-4224-9707-1e9eeb624cc7" 00:35:02.610 ], 00:35:02.610 "product_name": "Malloc disk", 00:35:02.610 "block_size": 512, 00:35:02.610 "num_blocks": 65536, 00:35:02.610 "uuid": "b59d7ef2-e367-4224-9707-1e9eeb624cc7", 00:35:02.610 "assigned_rate_limits": { 00:35:02.610 "rw_ios_per_sec": 0, 00:35:02.610 "rw_mbytes_per_sec": 0, 00:35:02.610 "r_mbytes_per_sec": 0, 00:35:02.610 "w_mbytes_per_sec": 0 00:35:02.610 }, 00:35:02.610 "claimed": true, 00:35:02.610 "claim_type": "exclusive_write", 00:35:02.610 "zoned": false, 00:35:02.610 "supported_io_types": { 00:35:02.610 "read": true, 00:35:02.610 "write": true, 00:35:02.610 "unmap": true, 00:35:02.610 "flush": true, 00:35:02.610 "reset": true, 00:35:02.610 "nvme_admin": false, 00:35:02.610 "nvme_io": false, 00:35:02.610 "nvme_io_md": false, 00:35:02.610 "write_zeroes": true, 00:35:02.610 "zcopy": true, 00:35:02.610 "get_zone_info": false, 00:35:02.610 "zone_management": false, 00:35:02.610 "zone_append": false, 00:35:02.610 "compare": false, 00:35:02.610 "compare_and_write": false, 00:35:02.610 "abort": true, 00:35:02.610 "seek_hole": false, 00:35:02.610 "seek_data": false, 00:35:02.610 "copy": true, 00:35:02.610 "nvme_iov_md": false 00:35:02.610 }, 00:35:02.610 "memory_domains": [ 00:35:02.610 { 00:35:02.610 "dma_device_id": "system", 00:35:02.610 "dma_device_type": 1 00:35:02.610 }, 00:35:02.610 { 00:35:02.610 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:35:02.610 "dma_device_type": 2 00:35:02.610 } 00:35:02.610 ], 00:35:02.610 "driver_specific": {} 00:35:02.610 } 00:35:02.610 ] 00:35:02.610 18:33:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:02.610 18:33:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:35:02.610 18:33:33 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:35:02.610 18:33:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:35:02.610 18:33:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:35:02.611 18:33:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:35:02.611 18:33:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:35:02.611 18:33:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:35:02.611 18:33:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:35:02.611 18:33:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:35:02.611 18:33:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:35:02.611 18:33:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:35:02.611 18:33:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:02.611 18:33:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:35:02.611 18:33:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:02.611 18:33:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:35:02.870 18:33:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:02.870 18:33:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:35:02.870 "name": "Existed_Raid", 00:35:02.870 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:02.870 "strip_size_kb": 64, 00:35:02.870 "state": 
"configuring", 00:35:02.870 "raid_level": "raid5f", 00:35:02.870 "superblock": false, 00:35:02.870 "num_base_bdevs": 4, 00:35:02.870 "num_base_bdevs_discovered": 3, 00:35:02.870 "num_base_bdevs_operational": 4, 00:35:02.870 "base_bdevs_list": [ 00:35:02.870 { 00:35:02.870 "name": "BaseBdev1", 00:35:02.870 "uuid": "b59d7ef2-e367-4224-9707-1e9eeb624cc7", 00:35:02.870 "is_configured": true, 00:35:02.870 "data_offset": 0, 00:35:02.870 "data_size": 65536 00:35:02.870 }, 00:35:02.870 { 00:35:02.870 "name": null, 00:35:02.870 "uuid": "18b83546-d484-4337-a8b3-258bb24a6472", 00:35:02.870 "is_configured": false, 00:35:02.870 "data_offset": 0, 00:35:02.870 "data_size": 65536 00:35:02.870 }, 00:35:02.870 { 00:35:02.870 "name": "BaseBdev3", 00:35:02.870 "uuid": "1a36c142-2604-4e12-9591-49745c1a85f9", 00:35:02.870 "is_configured": true, 00:35:02.870 "data_offset": 0, 00:35:02.870 "data_size": 65536 00:35:02.870 }, 00:35:02.870 { 00:35:02.870 "name": "BaseBdev4", 00:35:02.870 "uuid": "ea641da4-83f7-4fa6-b5c3-f07d4414f2ce", 00:35:02.870 "is_configured": true, 00:35:02.870 "data_offset": 0, 00:35:02.870 "data_size": 65536 00:35:02.870 } 00:35:02.870 ] 00:35:02.870 }' 00:35:02.870 18:33:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:35:02.870 18:33:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:35:03.130 18:33:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:35:03.130 18:33:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:03.130 18:33:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:03.130 18:33:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:35:03.130 18:33:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:03.130 18:33:34 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:35:03.130 18:33:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:35:03.130 18:33:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:03.130 18:33:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:35:03.130 [2024-12-06 18:33:34.009379] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:35:03.130 18:33:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:03.130 18:33:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:35:03.130 18:33:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:35:03.130 18:33:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:35:03.130 18:33:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:35:03.130 18:33:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:35:03.130 18:33:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:35:03.130 18:33:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:35:03.130 18:33:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:35:03.130 18:33:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:35:03.130 18:33:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:35:03.130 18:33:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:03.130 18:33:34 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:35:03.130 18:33:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:03.130 18:33:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:35:03.130 18:33:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:03.130 18:33:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:35:03.130 "name": "Existed_Raid", 00:35:03.130 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:03.130 "strip_size_kb": 64, 00:35:03.130 "state": "configuring", 00:35:03.130 "raid_level": "raid5f", 00:35:03.130 "superblock": false, 00:35:03.130 "num_base_bdevs": 4, 00:35:03.130 "num_base_bdevs_discovered": 2, 00:35:03.130 "num_base_bdevs_operational": 4, 00:35:03.130 "base_bdevs_list": [ 00:35:03.130 { 00:35:03.130 "name": "BaseBdev1", 00:35:03.130 "uuid": "b59d7ef2-e367-4224-9707-1e9eeb624cc7", 00:35:03.130 "is_configured": true, 00:35:03.130 "data_offset": 0, 00:35:03.130 "data_size": 65536 00:35:03.130 }, 00:35:03.130 { 00:35:03.130 "name": null, 00:35:03.130 "uuid": "18b83546-d484-4337-a8b3-258bb24a6472", 00:35:03.130 "is_configured": false, 00:35:03.130 "data_offset": 0, 00:35:03.130 "data_size": 65536 00:35:03.130 }, 00:35:03.130 { 00:35:03.130 "name": null, 00:35:03.130 "uuid": "1a36c142-2604-4e12-9591-49745c1a85f9", 00:35:03.130 "is_configured": false, 00:35:03.130 "data_offset": 0, 00:35:03.130 "data_size": 65536 00:35:03.130 }, 00:35:03.130 { 00:35:03.130 "name": "BaseBdev4", 00:35:03.130 "uuid": "ea641da4-83f7-4fa6-b5c3-f07d4414f2ce", 00:35:03.130 "is_configured": true, 00:35:03.130 "data_offset": 0, 00:35:03.130 "data_size": 65536 00:35:03.130 } 00:35:03.130 ] 00:35:03.130 }' 00:35:03.130 18:33:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:35:03.130 18:33:34 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:35:03.699 18:33:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:03.699 18:33:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:03.699 18:33:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:35:03.699 18:33:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:35:03.699 18:33:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:03.699 18:33:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:35:03.699 18:33:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:35:03.699 18:33:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:03.699 18:33:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:35:03.699 [2024-12-06 18:33:34.473287] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:35:03.699 18:33:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:03.699 18:33:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:35:03.699 18:33:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:35:03.699 18:33:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:35:03.699 18:33:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:35:03.699 18:33:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:35:03.699 
18:33:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:35:03.699 18:33:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:35:03.699 18:33:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:35:03.699 18:33:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:35:03.699 18:33:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:35:03.699 18:33:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:03.699 18:33:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:35:03.699 18:33:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:03.699 18:33:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:35:03.699 18:33:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:03.699 18:33:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:35:03.699 "name": "Existed_Raid", 00:35:03.699 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:03.699 "strip_size_kb": 64, 00:35:03.699 "state": "configuring", 00:35:03.699 "raid_level": "raid5f", 00:35:03.699 "superblock": false, 00:35:03.699 "num_base_bdevs": 4, 00:35:03.699 "num_base_bdevs_discovered": 3, 00:35:03.699 "num_base_bdevs_operational": 4, 00:35:03.699 "base_bdevs_list": [ 00:35:03.699 { 00:35:03.699 "name": "BaseBdev1", 00:35:03.699 "uuid": "b59d7ef2-e367-4224-9707-1e9eeb624cc7", 00:35:03.699 "is_configured": true, 00:35:03.699 "data_offset": 0, 00:35:03.699 "data_size": 65536 00:35:03.699 }, 00:35:03.699 { 00:35:03.699 "name": null, 00:35:03.699 "uuid": "18b83546-d484-4337-a8b3-258bb24a6472", 00:35:03.699 "is_configured": 
false, 00:35:03.699 "data_offset": 0, 00:35:03.699 "data_size": 65536 00:35:03.699 }, 00:35:03.699 { 00:35:03.699 "name": "BaseBdev3", 00:35:03.699 "uuid": "1a36c142-2604-4e12-9591-49745c1a85f9", 00:35:03.699 "is_configured": true, 00:35:03.699 "data_offset": 0, 00:35:03.699 "data_size": 65536 00:35:03.699 }, 00:35:03.699 { 00:35:03.699 "name": "BaseBdev4", 00:35:03.699 "uuid": "ea641da4-83f7-4fa6-b5c3-f07d4414f2ce", 00:35:03.699 "is_configured": true, 00:35:03.699 "data_offset": 0, 00:35:03.699 "data_size": 65536 00:35:03.699 } 00:35:03.699 ] 00:35:03.699 }' 00:35:03.699 18:33:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:35:03.699 18:33:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:35:03.957 18:33:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:03.957 18:33:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:03.957 18:33:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:35:03.957 18:33:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:35:04.215 18:33:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:04.215 18:33:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:35:04.215 18:33:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:35:04.215 18:33:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:04.215 18:33:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:35:04.215 [2024-12-06 18:33:34.937336] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:35:04.215 18:33:35 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:04.215 18:33:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:35:04.215 18:33:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:35:04.215 18:33:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:35:04.215 18:33:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:35:04.215 18:33:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:35:04.215 18:33:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:35:04.215 18:33:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:35:04.215 18:33:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:35:04.215 18:33:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:35:04.215 18:33:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:35:04.215 18:33:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:04.215 18:33:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:35:04.215 18:33:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:04.215 18:33:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:35:04.215 18:33:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:04.215 18:33:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:35:04.215 "name": "Existed_Raid", 00:35:04.215 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:35:04.215 "strip_size_kb": 64, 00:35:04.215 "state": "configuring", 00:35:04.215 "raid_level": "raid5f", 00:35:04.215 "superblock": false, 00:35:04.215 "num_base_bdevs": 4, 00:35:04.215 "num_base_bdevs_discovered": 2, 00:35:04.215 "num_base_bdevs_operational": 4, 00:35:04.215 "base_bdevs_list": [ 00:35:04.215 { 00:35:04.215 "name": null, 00:35:04.215 "uuid": "b59d7ef2-e367-4224-9707-1e9eeb624cc7", 00:35:04.215 "is_configured": false, 00:35:04.215 "data_offset": 0, 00:35:04.215 "data_size": 65536 00:35:04.215 }, 00:35:04.215 { 00:35:04.215 "name": null, 00:35:04.215 "uuid": "18b83546-d484-4337-a8b3-258bb24a6472", 00:35:04.215 "is_configured": false, 00:35:04.215 "data_offset": 0, 00:35:04.215 "data_size": 65536 00:35:04.215 }, 00:35:04.215 { 00:35:04.215 "name": "BaseBdev3", 00:35:04.215 "uuid": "1a36c142-2604-4e12-9591-49745c1a85f9", 00:35:04.215 "is_configured": true, 00:35:04.215 "data_offset": 0, 00:35:04.215 "data_size": 65536 00:35:04.215 }, 00:35:04.215 { 00:35:04.215 "name": "BaseBdev4", 00:35:04.215 "uuid": "ea641da4-83f7-4fa6-b5c3-f07d4414f2ce", 00:35:04.215 "is_configured": true, 00:35:04.215 "data_offset": 0, 00:35:04.215 "data_size": 65536 00:35:04.215 } 00:35:04.215 ] 00:35:04.215 }' 00:35:04.215 18:33:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:35:04.215 18:33:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:35:04.473 18:33:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:04.473 18:33:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:35:04.473 18:33:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:04.473 18:33:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:35:04.732 18:33:35 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:04.732 18:33:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:35:04.732 18:33:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:35:04.732 18:33:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:04.732 18:33:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:35:04.732 [2024-12-06 18:33:35.454325] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:35:04.732 18:33:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:04.732 18:33:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:35:04.732 18:33:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:35:04.732 18:33:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:35:04.732 18:33:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:35:04.732 18:33:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:35:04.732 18:33:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:35:04.732 18:33:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:35:04.732 18:33:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:35:04.732 18:33:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:35:04.733 18:33:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:35:04.733 18:33:35 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:04.733 18:33:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:04.733 18:33:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:35:04.733 18:33:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:35:04.733 18:33:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:04.733 18:33:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:35:04.733 "name": "Existed_Raid", 00:35:04.733 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:04.733 "strip_size_kb": 64, 00:35:04.733 "state": "configuring", 00:35:04.733 "raid_level": "raid5f", 00:35:04.733 "superblock": false, 00:35:04.733 "num_base_bdevs": 4, 00:35:04.733 "num_base_bdevs_discovered": 3, 00:35:04.733 "num_base_bdevs_operational": 4, 00:35:04.733 "base_bdevs_list": [ 00:35:04.733 { 00:35:04.733 "name": null, 00:35:04.733 "uuid": "b59d7ef2-e367-4224-9707-1e9eeb624cc7", 00:35:04.733 "is_configured": false, 00:35:04.733 "data_offset": 0, 00:35:04.733 "data_size": 65536 00:35:04.733 }, 00:35:04.733 { 00:35:04.733 "name": "BaseBdev2", 00:35:04.733 "uuid": "18b83546-d484-4337-a8b3-258bb24a6472", 00:35:04.733 "is_configured": true, 00:35:04.733 "data_offset": 0, 00:35:04.733 "data_size": 65536 00:35:04.733 }, 00:35:04.733 { 00:35:04.733 "name": "BaseBdev3", 00:35:04.733 "uuid": "1a36c142-2604-4e12-9591-49745c1a85f9", 00:35:04.733 "is_configured": true, 00:35:04.733 "data_offset": 0, 00:35:04.733 "data_size": 65536 00:35:04.733 }, 00:35:04.733 { 00:35:04.733 "name": "BaseBdev4", 00:35:04.733 "uuid": "ea641da4-83f7-4fa6-b5c3-f07d4414f2ce", 00:35:04.733 "is_configured": true, 00:35:04.733 "data_offset": 0, 00:35:04.733 "data_size": 65536 00:35:04.733 } 00:35:04.733 ] 00:35:04.733 }' 00:35:04.733 18:33:35 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:35:04.733 18:33:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:35:04.991 18:33:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:04.991 18:33:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:35:04.991 18:33:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:04.991 18:33:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:35:04.991 18:33:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:04.991 18:33:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:35:04.991 18:33:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:04.991 18:33:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:35:04.991 18:33:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:04.991 18:33:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:35:04.991 18:33:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:05.250 18:33:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u b59d7ef2-e367-4224-9707-1e9eeb624cc7 00:35:05.250 18:33:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:05.250 18:33:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:35:05.250 [2024-12-06 18:33:35.999278] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:35:05.250 [2024-12-06 
18:33:35.999477] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:35:05.250 [2024-12-06 18:33:35.999499] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:35:05.250 [2024-12-06 18:33:35.999835] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:35:05.250 [2024-12-06 18:33:36.007223] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:35:05.250 [2024-12-06 18:33:36.007252] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:35:05.250 [2024-12-06 18:33:36.007542] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:35:05.250 NewBaseBdev 00:35:05.250 18:33:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:05.250 18:33:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:35:05.250 18:33:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:35:05.250 18:33:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:35:05.250 18:33:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:35:05.250 18:33:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:35:05.250 18:33:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:35:05.250 18:33:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:35:05.250 18:33:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:05.250 18:33:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:35:05.250 18:33:36 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:05.250 18:33:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:35:05.250 18:33:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:05.250 18:33:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:35:05.250 [ 00:35:05.250 { 00:35:05.250 "name": "NewBaseBdev", 00:35:05.250 "aliases": [ 00:35:05.250 "b59d7ef2-e367-4224-9707-1e9eeb624cc7" 00:35:05.250 ], 00:35:05.250 "product_name": "Malloc disk", 00:35:05.250 "block_size": 512, 00:35:05.250 "num_blocks": 65536, 00:35:05.250 "uuid": "b59d7ef2-e367-4224-9707-1e9eeb624cc7", 00:35:05.250 "assigned_rate_limits": { 00:35:05.250 "rw_ios_per_sec": 0, 00:35:05.250 "rw_mbytes_per_sec": 0, 00:35:05.250 "r_mbytes_per_sec": 0, 00:35:05.250 "w_mbytes_per_sec": 0 00:35:05.250 }, 00:35:05.250 "claimed": true, 00:35:05.250 "claim_type": "exclusive_write", 00:35:05.250 "zoned": false, 00:35:05.250 "supported_io_types": { 00:35:05.250 "read": true, 00:35:05.250 "write": true, 00:35:05.250 "unmap": true, 00:35:05.250 "flush": true, 00:35:05.250 "reset": true, 00:35:05.250 "nvme_admin": false, 00:35:05.250 "nvme_io": false, 00:35:05.250 "nvme_io_md": false, 00:35:05.250 "write_zeroes": true, 00:35:05.250 "zcopy": true, 00:35:05.250 "get_zone_info": false, 00:35:05.250 "zone_management": false, 00:35:05.250 "zone_append": false, 00:35:05.250 "compare": false, 00:35:05.250 "compare_and_write": false, 00:35:05.250 "abort": true, 00:35:05.250 "seek_hole": false, 00:35:05.250 "seek_data": false, 00:35:05.250 "copy": true, 00:35:05.250 "nvme_iov_md": false 00:35:05.250 }, 00:35:05.250 "memory_domains": [ 00:35:05.250 { 00:35:05.250 "dma_device_id": "system", 00:35:05.250 "dma_device_type": 1 00:35:05.250 }, 00:35:05.250 { 00:35:05.250 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:35:05.250 "dma_device_type": 2 00:35:05.250 } 
00:35:05.250 ], 00:35:05.250 "driver_specific": {} 00:35:05.250 } 00:35:05.250 ] 00:35:05.250 18:33:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:05.250 18:33:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:35:05.250 18:33:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:35:05.250 18:33:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:35:05.250 18:33:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:35:05.250 18:33:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:35:05.250 18:33:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:35:05.250 18:33:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:35:05.250 18:33:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:35:05.250 18:33:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:35:05.250 18:33:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:35:05.250 18:33:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:35:05.250 18:33:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:05.250 18:33:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:05.250 18:33:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:35:05.250 18:33:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:35:05.250 18:33:36 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:05.250 18:33:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:35:05.250 "name": "Existed_Raid", 00:35:05.250 "uuid": "96802642-3ee6-4c99-aa12-dcf41725e1d1", 00:35:05.250 "strip_size_kb": 64, 00:35:05.250 "state": "online", 00:35:05.250 "raid_level": "raid5f", 00:35:05.250 "superblock": false, 00:35:05.250 "num_base_bdevs": 4, 00:35:05.250 "num_base_bdevs_discovered": 4, 00:35:05.250 "num_base_bdevs_operational": 4, 00:35:05.250 "base_bdevs_list": [ 00:35:05.250 { 00:35:05.250 "name": "NewBaseBdev", 00:35:05.250 "uuid": "b59d7ef2-e367-4224-9707-1e9eeb624cc7", 00:35:05.250 "is_configured": true, 00:35:05.250 "data_offset": 0, 00:35:05.250 "data_size": 65536 00:35:05.250 }, 00:35:05.250 { 00:35:05.250 "name": "BaseBdev2", 00:35:05.250 "uuid": "18b83546-d484-4337-a8b3-258bb24a6472", 00:35:05.250 "is_configured": true, 00:35:05.250 "data_offset": 0, 00:35:05.250 "data_size": 65536 00:35:05.250 }, 00:35:05.250 { 00:35:05.250 "name": "BaseBdev3", 00:35:05.250 "uuid": "1a36c142-2604-4e12-9591-49745c1a85f9", 00:35:05.250 "is_configured": true, 00:35:05.250 "data_offset": 0, 00:35:05.250 "data_size": 65536 00:35:05.250 }, 00:35:05.250 { 00:35:05.250 "name": "BaseBdev4", 00:35:05.250 "uuid": "ea641da4-83f7-4fa6-b5c3-f07d4414f2ce", 00:35:05.250 "is_configured": true, 00:35:05.250 "data_offset": 0, 00:35:05.250 "data_size": 65536 00:35:05.250 } 00:35:05.250 ] 00:35:05.250 }' 00:35:05.250 18:33:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:35:05.250 18:33:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:35:05.523 18:33:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:35:05.523 18:33:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:35:05.523 18:33:36 bdev_raid.raid5f_state_function_test 
-- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:35:05.523 18:33:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:35:05.523 18:33:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:35:05.523 18:33:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:35:05.523 18:33:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:35:05.523 18:33:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:35:05.523 18:33:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:05.523 18:33:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:35:05.523 [2024-12-06 18:33:36.428592] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:35:05.523 18:33:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:05.523 18:33:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:35:05.523 "name": "Existed_Raid", 00:35:05.523 "aliases": [ 00:35:05.523 "96802642-3ee6-4c99-aa12-dcf41725e1d1" 00:35:05.523 ], 00:35:05.523 "product_name": "Raid Volume", 00:35:05.523 "block_size": 512, 00:35:05.523 "num_blocks": 196608, 00:35:05.523 "uuid": "96802642-3ee6-4c99-aa12-dcf41725e1d1", 00:35:05.523 "assigned_rate_limits": { 00:35:05.523 "rw_ios_per_sec": 0, 00:35:05.523 "rw_mbytes_per_sec": 0, 00:35:05.523 "r_mbytes_per_sec": 0, 00:35:05.523 "w_mbytes_per_sec": 0 00:35:05.523 }, 00:35:05.523 "claimed": false, 00:35:05.523 "zoned": false, 00:35:05.523 "supported_io_types": { 00:35:05.523 "read": true, 00:35:05.523 "write": true, 00:35:05.523 "unmap": false, 00:35:05.523 "flush": false, 00:35:05.523 "reset": true, 00:35:05.523 "nvme_admin": false, 00:35:05.523 "nvme_io": false, 00:35:05.523 "nvme_io_md": 
false, 00:35:05.523 "write_zeroes": true, 00:35:05.523 "zcopy": false, 00:35:05.523 "get_zone_info": false, 00:35:05.523 "zone_management": false, 00:35:05.523 "zone_append": false, 00:35:05.523 "compare": false, 00:35:05.523 "compare_and_write": false, 00:35:05.523 "abort": false, 00:35:05.523 "seek_hole": false, 00:35:05.523 "seek_data": false, 00:35:05.523 "copy": false, 00:35:05.523 "nvme_iov_md": false 00:35:05.523 }, 00:35:05.523 "driver_specific": { 00:35:05.523 "raid": { 00:35:05.523 "uuid": "96802642-3ee6-4c99-aa12-dcf41725e1d1", 00:35:05.523 "strip_size_kb": 64, 00:35:05.523 "state": "online", 00:35:05.523 "raid_level": "raid5f", 00:35:05.523 "superblock": false, 00:35:05.523 "num_base_bdevs": 4, 00:35:05.523 "num_base_bdevs_discovered": 4, 00:35:05.523 "num_base_bdevs_operational": 4, 00:35:05.523 "base_bdevs_list": [ 00:35:05.523 { 00:35:05.523 "name": "NewBaseBdev", 00:35:05.523 "uuid": "b59d7ef2-e367-4224-9707-1e9eeb624cc7", 00:35:05.523 "is_configured": true, 00:35:05.523 "data_offset": 0, 00:35:05.523 "data_size": 65536 00:35:05.523 }, 00:35:05.523 { 00:35:05.523 "name": "BaseBdev2", 00:35:05.523 "uuid": "18b83546-d484-4337-a8b3-258bb24a6472", 00:35:05.524 "is_configured": true, 00:35:05.524 "data_offset": 0, 00:35:05.524 "data_size": 65536 00:35:05.524 }, 00:35:05.524 { 00:35:05.524 "name": "BaseBdev3", 00:35:05.524 "uuid": "1a36c142-2604-4e12-9591-49745c1a85f9", 00:35:05.524 "is_configured": true, 00:35:05.524 "data_offset": 0, 00:35:05.524 "data_size": 65536 00:35:05.524 }, 00:35:05.524 { 00:35:05.524 "name": "BaseBdev4", 00:35:05.524 "uuid": "ea641da4-83f7-4fa6-b5c3-f07d4414f2ce", 00:35:05.524 "is_configured": true, 00:35:05.524 "data_offset": 0, 00:35:05.524 "data_size": 65536 00:35:05.524 } 00:35:05.524 ] 00:35:05.524 } 00:35:05.524 } 00:35:05.524 }' 00:35:05.524 18:33:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:35:05.782 18:33:36 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:35:05.782 BaseBdev2 00:35:05.782 BaseBdev3 00:35:05.782 BaseBdev4' 00:35:05.782 18:33:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:35:05.782 18:33:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:35:05.782 18:33:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:35:05.782 18:33:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:35:05.782 18:33:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:35:05.782 18:33:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:05.782 18:33:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:35:05.782 18:33:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:05.782 18:33:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:35:05.782 18:33:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:35:05.782 18:33:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:35:05.782 18:33:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:35:05.782 18:33:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:35:05.782 18:33:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:05.782 18:33:36 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:35:05.782 18:33:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:05.782 18:33:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:35:05.782 18:33:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:35:05.782 18:33:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:35:05.782 18:33:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:35:05.782 18:33:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:35:05.782 18:33:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:05.782 18:33:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:35:05.782 18:33:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:05.782 18:33:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:35:05.782 18:33:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:35:05.782 18:33:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:35:05.782 18:33:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:35:05.782 18:33:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:05.782 18:33:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:35:05.782 18:33:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:35:05.782 18:33:36 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:05.782 18:33:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:35:05.782 18:33:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:35:05.782 18:33:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:35:05.782 18:33:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:05.782 18:33:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:35:05.782 [2024-12-06 18:33:36.696254] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:35:05.782 [2024-12-06 18:33:36.696286] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:35:05.782 [2024-12-06 18:33:36.696370] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:35:05.782 [2024-12-06 18:33:36.696729] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:35:05.782 [2024-12-06 18:33:36.696746] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:35:05.782 18:33:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:05.782 18:33:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 82492 00:35:05.782 18:33:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 82492 ']' 00:35:05.782 18:33:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@958 -- # kill -0 82492 00:35:05.782 18:33:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@959 -- # uname 00:35:05.782 18:33:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 
00:35:05.782 18:33:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 82492 00:35:06.040 18:33:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:35:06.040 18:33:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:35:06.040 killing process with pid 82492 00:35:06.040 18:33:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 82492' 00:35:06.040 18:33:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@973 -- # kill 82492 00:35:06.040 [2024-12-06 18:33:36.742841] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:35:06.040 18:33:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@978 -- # wait 82492 00:35:06.299 [2024-12-06 18:33:37.176839] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:35:07.675 18:33:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:35:07.675 00:35:07.675 real 0m11.029s 00:35:07.675 user 0m17.036s 00:35:07.675 sys 0m2.412s 00:35:07.675 18:33:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:07.675 18:33:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:35:07.675 ************************************ 00:35:07.675 END TEST raid5f_state_function_test 00:35:07.675 ************************************ 00:35:07.675 18:33:38 bdev_raid -- bdev/bdev_raid.sh@987 -- # run_test raid5f_state_function_test_sb raid_state_function_test raid5f 4 true 00:35:07.675 18:33:38 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:35:07.675 18:33:38 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:07.675 18:33:38 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:35:07.675 ************************************ 00:35:07.675 START TEST 
raid5f_state_function_test_sb 00:35:07.675 ************************************ 00:35:07.675 18:33:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid5f 4 true 00:35:07.675 18:33:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:35:07.675 18:33:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:35:07.675 18:33:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:35:07.675 18:33:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:35:07.675 18:33:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:35:07.675 18:33:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:35:07.675 18:33:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:35:07.675 18:33:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:35:07.675 18:33:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:35:07.675 18:33:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:35:07.675 18:33:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:35:07.675 18:33:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:35:07.675 18:33:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:35:07.675 18:33:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:35:07.675 18:33:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:35:07.675 18:33:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:35:07.675 
18:33:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:35:07.675 18:33:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:35:07.675 18:33:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:35:07.675 18:33:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:35:07.675 18:33:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:35:07.675 18:33:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:35:07.675 18:33:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:35:07.675 18:33:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:35:07.675 18:33:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:35:07.675 18:33:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:35:07.675 18:33:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:35:07.675 18:33:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:35:07.675 18:33:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:35:07.675 Process raid pid: 83154 00:35:07.675 18:33:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=83154 00:35:07.675 18:33:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:35:07.676 18:33:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 83154' 00:35:07.676 18:33:38 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 83154 00:35:07.676 18:33:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 83154 ']' 00:35:07.676 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:07.676 18:33:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:07.676 18:33:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:07.676 18:33:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:07.676 18:33:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:07.676 18:33:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:07.676 [2024-12-06 18:33:38.616049] Starting SPDK v25.01-pre git sha1 b6a18b192 / DPDK 24.03.0 initialization... 
00:35:07.676 [2024-12-06 18:33:38.616210] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:07.935 [2024-12-06 18:33:38.799961] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:08.194 [2024-12-06 18:33:38.937611] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:08.454 [2024-12-06 18:33:39.195850] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:35:08.454 [2024-12-06 18:33:39.195900] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:35:08.713 18:33:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:08.713 18:33:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:35:08.713 18:33:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:35:08.713 18:33:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:08.713 18:33:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:08.713 [2024-12-06 18:33:39.459818] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:35:08.713 [2024-12-06 18:33:39.459889] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:35:08.713 [2024-12-06 18:33:39.459901] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:35:08.713 [2024-12-06 18:33:39.459915] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:35:08.713 [2024-12-06 18:33:39.459923] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently 
unable to find bdev with name: BaseBdev3 00:35:08.713 [2024-12-06 18:33:39.459936] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:35:08.713 [2024-12-06 18:33:39.459944] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:35:08.713 [2024-12-06 18:33:39.459956] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:35:08.713 18:33:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:08.713 18:33:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:35:08.713 18:33:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:35:08.713 18:33:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:35:08.713 18:33:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:35:08.713 18:33:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:35:08.713 18:33:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:35:08.714 18:33:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:35:08.714 18:33:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:35:08.714 18:33:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:35:08.714 18:33:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:35:08.714 18:33:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:08.714 18:33:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name 
== "Existed_Raid")' 00:35:08.714 18:33:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:08.714 18:33:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:08.714 18:33:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:08.714 18:33:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:35:08.714 "name": "Existed_Raid", 00:35:08.714 "uuid": "1cfdbae4-9109-4c07-8272-e879b89c7926", 00:35:08.714 "strip_size_kb": 64, 00:35:08.714 "state": "configuring", 00:35:08.714 "raid_level": "raid5f", 00:35:08.714 "superblock": true, 00:35:08.714 "num_base_bdevs": 4, 00:35:08.714 "num_base_bdevs_discovered": 0, 00:35:08.714 "num_base_bdevs_operational": 4, 00:35:08.714 "base_bdevs_list": [ 00:35:08.714 { 00:35:08.714 "name": "BaseBdev1", 00:35:08.714 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:08.714 "is_configured": false, 00:35:08.714 "data_offset": 0, 00:35:08.714 "data_size": 0 00:35:08.714 }, 00:35:08.714 { 00:35:08.714 "name": "BaseBdev2", 00:35:08.714 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:08.714 "is_configured": false, 00:35:08.714 "data_offset": 0, 00:35:08.714 "data_size": 0 00:35:08.714 }, 00:35:08.714 { 00:35:08.714 "name": "BaseBdev3", 00:35:08.714 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:08.714 "is_configured": false, 00:35:08.714 "data_offset": 0, 00:35:08.714 "data_size": 0 00:35:08.714 }, 00:35:08.714 { 00:35:08.714 "name": "BaseBdev4", 00:35:08.714 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:08.714 "is_configured": false, 00:35:08.714 "data_offset": 0, 00:35:08.714 "data_size": 0 00:35:08.714 } 00:35:08.714 ] 00:35:08.714 }' 00:35:08.714 18:33:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:35:08.714 18:33:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:35:08.974 18:33:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:35:08.974 18:33:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:08.974 18:33:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:08.974 [2024-12-06 18:33:39.891333] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:35:08.974 [2024-12-06 18:33:39.891379] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:35:08.974 18:33:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:08.974 18:33:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:35:08.974 18:33:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:08.974 18:33:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:08.974 [2024-12-06 18:33:39.903337] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:35:08.974 [2024-12-06 18:33:39.903398] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:35:08.974 [2024-12-06 18:33:39.903410] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:35:08.974 [2024-12-06 18:33:39.903425] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:35:08.974 [2024-12-06 18:33:39.903432] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:35:08.974 [2024-12-06 18:33:39.903445] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:35:08.974 [2024-12-06 18:33:39.903452] 
bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:35:08.974 [2024-12-06 18:33:39.903465] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:35:08.974 18:33:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:08.974 18:33:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:35:08.974 18:33:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:08.974 18:33:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:09.234 [2024-12-06 18:33:39.959328] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:35:09.234 BaseBdev1 00:35:09.234 18:33:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:09.235 18:33:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:35:09.235 18:33:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:35:09.235 18:33:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:35:09.235 18:33:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:35:09.235 18:33:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:35:09.235 18:33:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:35:09.235 18:33:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:35:09.235 18:33:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:09.235 18:33:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:35:09.235 18:33:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:09.235 18:33:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:35:09.235 18:33:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:09.235 18:33:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:09.235 [ 00:35:09.235 { 00:35:09.235 "name": "BaseBdev1", 00:35:09.235 "aliases": [ 00:35:09.235 "a30517b8-285e-42c8-881e-6430854b554a" 00:35:09.235 ], 00:35:09.235 "product_name": "Malloc disk", 00:35:09.235 "block_size": 512, 00:35:09.235 "num_blocks": 65536, 00:35:09.235 "uuid": "a30517b8-285e-42c8-881e-6430854b554a", 00:35:09.235 "assigned_rate_limits": { 00:35:09.235 "rw_ios_per_sec": 0, 00:35:09.235 "rw_mbytes_per_sec": 0, 00:35:09.235 "r_mbytes_per_sec": 0, 00:35:09.235 "w_mbytes_per_sec": 0 00:35:09.235 }, 00:35:09.235 "claimed": true, 00:35:09.235 "claim_type": "exclusive_write", 00:35:09.235 "zoned": false, 00:35:09.235 "supported_io_types": { 00:35:09.235 "read": true, 00:35:09.235 "write": true, 00:35:09.235 "unmap": true, 00:35:09.235 "flush": true, 00:35:09.235 "reset": true, 00:35:09.235 "nvme_admin": false, 00:35:09.235 "nvme_io": false, 00:35:09.235 "nvme_io_md": false, 00:35:09.235 "write_zeroes": true, 00:35:09.235 "zcopy": true, 00:35:09.235 "get_zone_info": false, 00:35:09.235 "zone_management": false, 00:35:09.235 "zone_append": false, 00:35:09.235 "compare": false, 00:35:09.235 "compare_and_write": false, 00:35:09.235 "abort": true, 00:35:09.235 "seek_hole": false, 00:35:09.235 "seek_data": false, 00:35:09.235 "copy": true, 00:35:09.235 "nvme_iov_md": false 00:35:09.235 }, 00:35:09.235 "memory_domains": [ 00:35:09.235 { 00:35:09.235 "dma_device_id": "system", 00:35:09.235 "dma_device_type": 1 00:35:09.235 }, 00:35:09.235 { 00:35:09.235 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:35:09.235 "dma_device_type": 2 00:35:09.235 } 00:35:09.235 ], 00:35:09.235 "driver_specific": {} 00:35:09.235 } 00:35:09.235 ] 00:35:09.235 18:33:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:09.235 18:33:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:35:09.235 18:33:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:35:09.235 18:33:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:35:09.235 18:33:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:35:09.235 18:33:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:35:09.235 18:33:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:35:09.235 18:33:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:35:09.235 18:33:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:35:09.235 18:33:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:35:09.235 18:33:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:35:09.235 18:33:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:35:09.235 18:33:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:09.235 18:33:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:09.235 18:33:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:35:09.235 18:33:40 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:09.235 18:33:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:09.235 18:33:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:35:09.235 "name": "Existed_Raid", 00:35:09.235 "uuid": "7dc873ba-fd5a-4401-873d-82592e2bf174", 00:35:09.235 "strip_size_kb": 64, 00:35:09.235 "state": "configuring", 00:35:09.235 "raid_level": "raid5f", 00:35:09.235 "superblock": true, 00:35:09.235 "num_base_bdevs": 4, 00:35:09.235 "num_base_bdevs_discovered": 1, 00:35:09.235 "num_base_bdevs_operational": 4, 00:35:09.235 "base_bdevs_list": [ 00:35:09.235 { 00:35:09.235 "name": "BaseBdev1", 00:35:09.235 "uuid": "a30517b8-285e-42c8-881e-6430854b554a", 00:35:09.235 "is_configured": true, 00:35:09.235 "data_offset": 2048, 00:35:09.235 "data_size": 63488 00:35:09.235 }, 00:35:09.235 { 00:35:09.235 "name": "BaseBdev2", 00:35:09.235 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:09.235 "is_configured": false, 00:35:09.235 "data_offset": 0, 00:35:09.235 "data_size": 0 00:35:09.235 }, 00:35:09.235 { 00:35:09.235 "name": "BaseBdev3", 00:35:09.235 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:09.235 "is_configured": false, 00:35:09.235 "data_offset": 0, 00:35:09.235 "data_size": 0 00:35:09.235 }, 00:35:09.235 { 00:35:09.235 "name": "BaseBdev4", 00:35:09.235 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:09.235 "is_configured": false, 00:35:09.235 "data_offset": 0, 00:35:09.235 "data_size": 0 00:35:09.235 } 00:35:09.235 ] 00:35:09.235 }' 00:35:09.235 18:33:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:35:09.235 18:33:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:09.495 18:33:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:35:09.495 18:33:40 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:09.495 18:33:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:09.495 [2024-12-06 18:33:40.438711] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:35:09.495 [2024-12-06 18:33:40.438893] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:35:09.495 18:33:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:09.755 18:33:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:35:09.755 18:33:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:09.755 18:33:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:09.755 [2024-12-06 18:33:40.450803] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:35:09.755 [2024-12-06 18:33:40.453290] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:35:09.755 [2024-12-06 18:33:40.453434] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:35:09.755 [2024-12-06 18:33:40.453558] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:35:09.755 [2024-12-06 18:33:40.453609] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:35:09.755 [2024-12-06 18:33:40.453639] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:35:09.755 [2024-12-06 18:33:40.453672] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:35:09.755 18:33:40 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:09.755 18:33:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:35:09.755 18:33:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:35:09.755 18:33:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:35:09.755 18:33:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:35:09.755 18:33:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:35:09.755 18:33:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:35:09.755 18:33:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:35:09.755 18:33:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:35:09.755 18:33:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:35:09.755 18:33:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:35:09.755 18:33:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:35:09.755 18:33:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:35:09.755 18:33:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:09.755 18:33:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:09.755 18:33:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:09.755 18:33:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:35:09.755 18:33:40 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:09.755 18:33:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:35:09.755 "name": "Existed_Raid", 00:35:09.755 "uuid": "45dbcb74-a1a3-4949-bd22-be1b3cdec881", 00:35:09.755 "strip_size_kb": 64, 00:35:09.755 "state": "configuring", 00:35:09.755 "raid_level": "raid5f", 00:35:09.755 "superblock": true, 00:35:09.755 "num_base_bdevs": 4, 00:35:09.755 "num_base_bdevs_discovered": 1, 00:35:09.755 "num_base_bdevs_operational": 4, 00:35:09.755 "base_bdevs_list": [ 00:35:09.755 { 00:35:09.755 "name": "BaseBdev1", 00:35:09.755 "uuid": "a30517b8-285e-42c8-881e-6430854b554a", 00:35:09.755 "is_configured": true, 00:35:09.755 "data_offset": 2048, 00:35:09.755 "data_size": 63488 00:35:09.755 }, 00:35:09.755 { 00:35:09.755 "name": "BaseBdev2", 00:35:09.755 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:09.755 "is_configured": false, 00:35:09.755 "data_offset": 0, 00:35:09.755 "data_size": 0 00:35:09.755 }, 00:35:09.755 { 00:35:09.755 "name": "BaseBdev3", 00:35:09.755 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:09.755 "is_configured": false, 00:35:09.755 "data_offset": 0, 00:35:09.755 "data_size": 0 00:35:09.755 }, 00:35:09.755 { 00:35:09.755 "name": "BaseBdev4", 00:35:09.755 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:09.755 "is_configured": false, 00:35:09.755 "data_offset": 0, 00:35:09.755 "data_size": 0 00:35:09.755 } 00:35:09.755 ] 00:35:09.755 }' 00:35:09.755 18:33:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:35:09.755 18:33:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:10.016 18:33:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:35:10.016 18:33:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:35:10.016 18:33:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:10.016 [2024-12-06 18:33:40.912234] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:35:10.016 BaseBdev2 00:35:10.016 18:33:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:10.016 18:33:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:35:10.016 18:33:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:35:10.016 18:33:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:35:10.016 18:33:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:35:10.016 18:33:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:35:10.016 18:33:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:35:10.016 18:33:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:35:10.016 18:33:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:10.016 18:33:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:10.016 18:33:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:10.016 18:33:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:35:10.016 18:33:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:10.016 18:33:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:10.016 [ 00:35:10.016 { 00:35:10.016 "name": "BaseBdev2", 00:35:10.016 "aliases": [ 00:35:10.016 
"c41ac9ea-e6cc-4e75-aa16-bdad48c5e1a9" 00:35:10.016 ], 00:35:10.016 "product_name": "Malloc disk", 00:35:10.016 "block_size": 512, 00:35:10.016 "num_blocks": 65536, 00:35:10.016 "uuid": "c41ac9ea-e6cc-4e75-aa16-bdad48c5e1a9", 00:35:10.016 "assigned_rate_limits": { 00:35:10.016 "rw_ios_per_sec": 0, 00:35:10.016 "rw_mbytes_per_sec": 0, 00:35:10.016 "r_mbytes_per_sec": 0, 00:35:10.016 "w_mbytes_per_sec": 0 00:35:10.016 }, 00:35:10.016 "claimed": true, 00:35:10.016 "claim_type": "exclusive_write", 00:35:10.016 "zoned": false, 00:35:10.016 "supported_io_types": { 00:35:10.016 "read": true, 00:35:10.016 "write": true, 00:35:10.016 "unmap": true, 00:35:10.016 "flush": true, 00:35:10.016 "reset": true, 00:35:10.016 "nvme_admin": false, 00:35:10.016 "nvme_io": false, 00:35:10.016 "nvme_io_md": false, 00:35:10.016 "write_zeroes": true, 00:35:10.016 "zcopy": true, 00:35:10.016 "get_zone_info": false, 00:35:10.016 "zone_management": false, 00:35:10.016 "zone_append": false, 00:35:10.016 "compare": false, 00:35:10.016 "compare_and_write": false, 00:35:10.016 "abort": true, 00:35:10.016 "seek_hole": false, 00:35:10.016 "seek_data": false, 00:35:10.016 "copy": true, 00:35:10.016 "nvme_iov_md": false 00:35:10.016 }, 00:35:10.016 "memory_domains": [ 00:35:10.016 { 00:35:10.016 "dma_device_id": "system", 00:35:10.016 "dma_device_type": 1 00:35:10.016 }, 00:35:10.016 { 00:35:10.016 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:35:10.016 "dma_device_type": 2 00:35:10.016 } 00:35:10.016 ], 00:35:10.016 "driver_specific": {} 00:35:10.016 } 00:35:10.016 ] 00:35:10.016 18:33:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:10.016 18:33:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:35:10.016 18:33:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:35:10.016 18:33:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 
00:35:10.016 18:33:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:35:10.016 18:33:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:35:10.016 18:33:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:35:10.016 18:33:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:35:10.016 18:33:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:35:10.016 18:33:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:35:10.016 18:33:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:35:10.016 18:33:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:35:10.016 18:33:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:35:10.016 18:33:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:35:10.276 18:33:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:10.276 18:33:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:35:10.276 18:33:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:10.276 18:33:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:10.276 18:33:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:10.276 18:33:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:35:10.276 "name": "Existed_Raid", 00:35:10.276 "uuid": 
"45dbcb74-a1a3-4949-bd22-be1b3cdec881", 00:35:10.276 "strip_size_kb": 64, 00:35:10.276 "state": "configuring", 00:35:10.276 "raid_level": "raid5f", 00:35:10.276 "superblock": true, 00:35:10.276 "num_base_bdevs": 4, 00:35:10.276 "num_base_bdevs_discovered": 2, 00:35:10.276 "num_base_bdevs_operational": 4, 00:35:10.277 "base_bdevs_list": [ 00:35:10.277 { 00:35:10.277 "name": "BaseBdev1", 00:35:10.277 "uuid": "a30517b8-285e-42c8-881e-6430854b554a", 00:35:10.277 "is_configured": true, 00:35:10.277 "data_offset": 2048, 00:35:10.277 "data_size": 63488 00:35:10.277 }, 00:35:10.277 { 00:35:10.277 "name": "BaseBdev2", 00:35:10.277 "uuid": "c41ac9ea-e6cc-4e75-aa16-bdad48c5e1a9", 00:35:10.277 "is_configured": true, 00:35:10.277 "data_offset": 2048, 00:35:10.277 "data_size": 63488 00:35:10.277 }, 00:35:10.277 { 00:35:10.277 "name": "BaseBdev3", 00:35:10.277 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:10.277 "is_configured": false, 00:35:10.277 "data_offset": 0, 00:35:10.277 "data_size": 0 00:35:10.277 }, 00:35:10.277 { 00:35:10.277 "name": "BaseBdev4", 00:35:10.277 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:10.277 "is_configured": false, 00:35:10.277 "data_offset": 0, 00:35:10.277 "data_size": 0 00:35:10.277 } 00:35:10.277 ] 00:35:10.277 }' 00:35:10.277 18:33:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:35:10.277 18:33:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:10.538 18:33:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:35:10.538 18:33:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:10.538 18:33:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:10.538 [2024-12-06 18:33:41.410581] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:35:10.538 BaseBdev3 
00:35:10.538 18:33:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:10.538 18:33:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:35:10.538 18:33:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:35:10.538 18:33:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:35:10.538 18:33:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:35:10.538 18:33:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:35:10.538 18:33:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:35:10.538 18:33:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:35:10.538 18:33:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:10.538 18:33:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:10.538 18:33:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:10.538 18:33:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:35:10.538 18:33:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:10.538 18:33:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:10.538 [ 00:35:10.538 { 00:35:10.538 "name": "BaseBdev3", 00:35:10.538 "aliases": [ 00:35:10.538 "1f6a6132-95c9-41e9-a757-88a2ccba3b65" 00:35:10.538 ], 00:35:10.538 "product_name": "Malloc disk", 00:35:10.538 "block_size": 512, 00:35:10.538 "num_blocks": 65536, 00:35:10.538 "uuid": "1f6a6132-95c9-41e9-a757-88a2ccba3b65", 00:35:10.538 
"assigned_rate_limits": { 00:35:10.538 "rw_ios_per_sec": 0, 00:35:10.538 "rw_mbytes_per_sec": 0, 00:35:10.538 "r_mbytes_per_sec": 0, 00:35:10.538 "w_mbytes_per_sec": 0 00:35:10.539 }, 00:35:10.539 "claimed": true, 00:35:10.539 "claim_type": "exclusive_write", 00:35:10.539 "zoned": false, 00:35:10.539 "supported_io_types": { 00:35:10.539 "read": true, 00:35:10.539 "write": true, 00:35:10.539 "unmap": true, 00:35:10.539 "flush": true, 00:35:10.539 "reset": true, 00:35:10.539 "nvme_admin": false, 00:35:10.539 "nvme_io": false, 00:35:10.539 "nvme_io_md": false, 00:35:10.539 "write_zeroes": true, 00:35:10.539 "zcopy": true, 00:35:10.539 "get_zone_info": false, 00:35:10.539 "zone_management": false, 00:35:10.539 "zone_append": false, 00:35:10.539 "compare": false, 00:35:10.539 "compare_and_write": false, 00:35:10.539 "abort": true, 00:35:10.539 "seek_hole": false, 00:35:10.539 "seek_data": false, 00:35:10.539 "copy": true, 00:35:10.539 "nvme_iov_md": false 00:35:10.539 }, 00:35:10.539 "memory_domains": [ 00:35:10.539 { 00:35:10.539 "dma_device_id": "system", 00:35:10.539 "dma_device_type": 1 00:35:10.539 }, 00:35:10.539 { 00:35:10.539 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:35:10.539 "dma_device_type": 2 00:35:10.539 } 00:35:10.539 ], 00:35:10.539 "driver_specific": {} 00:35:10.539 } 00:35:10.539 ] 00:35:10.539 18:33:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:10.539 18:33:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:35:10.539 18:33:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:35:10.539 18:33:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:35:10.539 18:33:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:35:10.539 18:33:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=Existed_Raid 00:35:10.539 18:33:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:35:10.539 18:33:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:35:10.539 18:33:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:35:10.539 18:33:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:35:10.539 18:33:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:35:10.539 18:33:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:35:10.539 18:33:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:35:10.539 18:33:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:35:10.539 18:33:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:10.539 18:33:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:10.539 18:33:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:35:10.539 18:33:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:10.539 18:33:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:10.856 18:33:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:35:10.856 "name": "Existed_Raid", 00:35:10.856 "uuid": "45dbcb74-a1a3-4949-bd22-be1b3cdec881", 00:35:10.856 "strip_size_kb": 64, 00:35:10.856 "state": "configuring", 00:35:10.856 "raid_level": "raid5f", 00:35:10.856 "superblock": true, 00:35:10.856 "num_base_bdevs": 4, 00:35:10.856 "num_base_bdevs_discovered": 3, 
00:35:10.856 "num_base_bdevs_operational": 4, 00:35:10.856 "base_bdevs_list": [ 00:35:10.856 { 00:35:10.856 "name": "BaseBdev1", 00:35:10.856 "uuid": "a30517b8-285e-42c8-881e-6430854b554a", 00:35:10.856 "is_configured": true, 00:35:10.856 "data_offset": 2048, 00:35:10.856 "data_size": 63488 00:35:10.856 }, 00:35:10.856 { 00:35:10.856 "name": "BaseBdev2", 00:35:10.856 "uuid": "c41ac9ea-e6cc-4e75-aa16-bdad48c5e1a9", 00:35:10.856 "is_configured": true, 00:35:10.856 "data_offset": 2048, 00:35:10.856 "data_size": 63488 00:35:10.856 }, 00:35:10.856 { 00:35:10.856 "name": "BaseBdev3", 00:35:10.856 "uuid": "1f6a6132-95c9-41e9-a757-88a2ccba3b65", 00:35:10.856 "is_configured": true, 00:35:10.856 "data_offset": 2048, 00:35:10.856 "data_size": 63488 00:35:10.856 }, 00:35:10.856 { 00:35:10.856 "name": "BaseBdev4", 00:35:10.856 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:10.856 "is_configured": false, 00:35:10.856 "data_offset": 0, 00:35:10.856 "data_size": 0 00:35:10.856 } 00:35:10.856 ] 00:35:10.856 }' 00:35:10.856 18:33:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:35:10.856 18:33:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:11.156 18:33:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:35:11.156 18:33:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:11.156 18:33:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:11.156 [2024-12-06 18:33:41.880982] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:35:11.156 [2024-12-06 18:33:41.881336] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:35:11.156 [2024-12-06 18:33:41.881356] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:35:11.156 [2024-12-06 
18:33:41.881675] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:35:11.156 BaseBdev4 00:35:11.156 18:33:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:11.156 18:33:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:35:11.156 18:33:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:35:11.156 18:33:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:35:11.156 18:33:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:35:11.156 18:33:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:35:11.156 18:33:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:35:11.156 18:33:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:35:11.156 18:33:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:11.156 18:33:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:11.156 [2024-12-06 18:33:41.889489] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:35:11.156 [2024-12-06 18:33:41.889516] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:35:11.156 [2024-12-06 18:33:41.889782] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:35:11.156 18:33:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:11.156 18:33:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:35:11.156 18:33:41 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:11.156 18:33:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:11.156 [ 00:35:11.156 { 00:35:11.156 "name": "BaseBdev4", 00:35:11.156 "aliases": [ 00:35:11.156 "2eb94efd-287c-4328-b068-e8e351658edd" 00:35:11.156 ], 00:35:11.157 "product_name": "Malloc disk", 00:35:11.157 "block_size": 512, 00:35:11.157 "num_blocks": 65536, 00:35:11.157 "uuid": "2eb94efd-287c-4328-b068-e8e351658edd", 00:35:11.157 "assigned_rate_limits": { 00:35:11.157 "rw_ios_per_sec": 0, 00:35:11.157 "rw_mbytes_per_sec": 0, 00:35:11.157 "r_mbytes_per_sec": 0, 00:35:11.157 "w_mbytes_per_sec": 0 00:35:11.157 }, 00:35:11.157 "claimed": true, 00:35:11.157 "claim_type": "exclusive_write", 00:35:11.157 "zoned": false, 00:35:11.157 "supported_io_types": { 00:35:11.157 "read": true, 00:35:11.157 "write": true, 00:35:11.157 "unmap": true, 00:35:11.157 "flush": true, 00:35:11.157 "reset": true, 00:35:11.157 "nvme_admin": false, 00:35:11.157 "nvme_io": false, 00:35:11.157 "nvme_io_md": false, 00:35:11.157 "write_zeroes": true, 00:35:11.157 "zcopy": true, 00:35:11.157 "get_zone_info": false, 00:35:11.157 "zone_management": false, 00:35:11.157 "zone_append": false, 00:35:11.157 "compare": false, 00:35:11.157 "compare_and_write": false, 00:35:11.157 "abort": true, 00:35:11.157 "seek_hole": false, 00:35:11.157 "seek_data": false, 00:35:11.157 "copy": true, 00:35:11.157 "nvme_iov_md": false 00:35:11.157 }, 00:35:11.157 "memory_domains": [ 00:35:11.157 { 00:35:11.157 "dma_device_id": "system", 00:35:11.157 "dma_device_type": 1 00:35:11.157 }, 00:35:11.157 { 00:35:11.157 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:35:11.157 "dma_device_type": 2 00:35:11.157 } 00:35:11.157 ], 00:35:11.157 "driver_specific": {} 00:35:11.157 } 00:35:11.157 ] 00:35:11.157 18:33:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:11.157 18:33:41 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:35:11.157 18:33:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:35:11.157 18:33:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:35:11.157 18:33:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:35:11.157 18:33:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:35:11.157 18:33:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:35:11.157 18:33:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:35:11.157 18:33:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:35:11.157 18:33:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:35:11.157 18:33:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:35:11.157 18:33:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:35:11.157 18:33:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:35:11.157 18:33:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:35:11.157 18:33:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:11.157 18:33:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:35:11.157 18:33:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:11.157 18:33:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:35:11.157 18:33:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:11.157 18:33:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:35:11.157 "name": "Existed_Raid", 00:35:11.157 "uuid": "45dbcb74-a1a3-4949-bd22-be1b3cdec881", 00:35:11.157 "strip_size_kb": 64, 00:35:11.157 "state": "online", 00:35:11.157 "raid_level": "raid5f", 00:35:11.157 "superblock": true, 00:35:11.157 "num_base_bdevs": 4, 00:35:11.157 "num_base_bdevs_discovered": 4, 00:35:11.157 "num_base_bdevs_operational": 4, 00:35:11.157 "base_bdevs_list": [ 00:35:11.157 { 00:35:11.157 "name": "BaseBdev1", 00:35:11.157 "uuid": "a30517b8-285e-42c8-881e-6430854b554a", 00:35:11.157 "is_configured": true, 00:35:11.157 "data_offset": 2048, 00:35:11.157 "data_size": 63488 00:35:11.157 }, 00:35:11.157 { 00:35:11.157 "name": "BaseBdev2", 00:35:11.157 "uuid": "c41ac9ea-e6cc-4e75-aa16-bdad48c5e1a9", 00:35:11.157 "is_configured": true, 00:35:11.157 "data_offset": 2048, 00:35:11.157 "data_size": 63488 00:35:11.157 }, 00:35:11.157 { 00:35:11.157 "name": "BaseBdev3", 00:35:11.157 "uuid": "1f6a6132-95c9-41e9-a757-88a2ccba3b65", 00:35:11.157 "is_configured": true, 00:35:11.157 "data_offset": 2048, 00:35:11.157 "data_size": 63488 00:35:11.157 }, 00:35:11.157 { 00:35:11.157 "name": "BaseBdev4", 00:35:11.157 "uuid": "2eb94efd-287c-4328-b068-e8e351658edd", 00:35:11.157 "is_configured": true, 00:35:11.157 "data_offset": 2048, 00:35:11.157 "data_size": 63488 00:35:11.157 } 00:35:11.157 ] 00:35:11.157 }' 00:35:11.157 18:33:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:35:11.157 18:33:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:11.420 18:33:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:35:11.420 18:33:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # 
local raid_bdev_name=Existed_Raid 00:35:11.420 18:33:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:35:11.420 18:33:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:35:11.420 18:33:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:35:11.420 18:33:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:35:11.420 18:33:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:35:11.420 18:33:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:11.420 18:33:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:11.420 18:33:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:35:11.420 [2024-12-06 18:33:42.358771] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:35:11.680 18:33:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:11.680 18:33:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:35:11.680 "name": "Existed_Raid", 00:35:11.680 "aliases": [ 00:35:11.680 "45dbcb74-a1a3-4949-bd22-be1b3cdec881" 00:35:11.680 ], 00:35:11.680 "product_name": "Raid Volume", 00:35:11.680 "block_size": 512, 00:35:11.680 "num_blocks": 190464, 00:35:11.680 "uuid": "45dbcb74-a1a3-4949-bd22-be1b3cdec881", 00:35:11.680 "assigned_rate_limits": { 00:35:11.680 "rw_ios_per_sec": 0, 00:35:11.680 "rw_mbytes_per_sec": 0, 00:35:11.680 "r_mbytes_per_sec": 0, 00:35:11.680 "w_mbytes_per_sec": 0 00:35:11.680 }, 00:35:11.680 "claimed": false, 00:35:11.680 "zoned": false, 00:35:11.680 "supported_io_types": { 00:35:11.680 "read": true, 00:35:11.680 "write": true, 00:35:11.680 "unmap": false, 00:35:11.680 "flush": false, 
00:35:11.680 "reset": true, 00:35:11.680 "nvme_admin": false, 00:35:11.680 "nvme_io": false, 00:35:11.680 "nvme_io_md": false, 00:35:11.680 "write_zeroes": true, 00:35:11.680 "zcopy": false, 00:35:11.680 "get_zone_info": false, 00:35:11.680 "zone_management": false, 00:35:11.680 "zone_append": false, 00:35:11.680 "compare": false, 00:35:11.680 "compare_and_write": false, 00:35:11.680 "abort": false, 00:35:11.680 "seek_hole": false, 00:35:11.680 "seek_data": false, 00:35:11.680 "copy": false, 00:35:11.680 "nvme_iov_md": false 00:35:11.680 }, 00:35:11.680 "driver_specific": { 00:35:11.680 "raid": { 00:35:11.680 "uuid": "45dbcb74-a1a3-4949-bd22-be1b3cdec881", 00:35:11.680 "strip_size_kb": 64, 00:35:11.680 "state": "online", 00:35:11.680 "raid_level": "raid5f", 00:35:11.680 "superblock": true, 00:35:11.680 "num_base_bdevs": 4, 00:35:11.680 "num_base_bdevs_discovered": 4, 00:35:11.680 "num_base_bdevs_operational": 4, 00:35:11.680 "base_bdevs_list": [ 00:35:11.680 { 00:35:11.680 "name": "BaseBdev1", 00:35:11.680 "uuid": "a30517b8-285e-42c8-881e-6430854b554a", 00:35:11.680 "is_configured": true, 00:35:11.680 "data_offset": 2048, 00:35:11.680 "data_size": 63488 00:35:11.680 }, 00:35:11.680 { 00:35:11.680 "name": "BaseBdev2", 00:35:11.680 "uuid": "c41ac9ea-e6cc-4e75-aa16-bdad48c5e1a9", 00:35:11.680 "is_configured": true, 00:35:11.680 "data_offset": 2048, 00:35:11.680 "data_size": 63488 00:35:11.680 }, 00:35:11.680 { 00:35:11.680 "name": "BaseBdev3", 00:35:11.680 "uuid": "1f6a6132-95c9-41e9-a757-88a2ccba3b65", 00:35:11.680 "is_configured": true, 00:35:11.680 "data_offset": 2048, 00:35:11.680 "data_size": 63488 00:35:11.680 }, 00:35:11.680 { 00:35:11.680 "name": "BaseBdev4", 00:35:11.680 "uuid": "2eb94efd-287c-4328-b068-e8e351658edd", 00:35:11.680 "is_configured": true, 00:35:11.680 "data_offset": 2048, 00:35:11.680 "data_size": 63488 00:35:11.680 } 00:35:11.680 ] 00:35:11.680 } 00:35:11.680 } 00:35:11.680 }' 00:35:11.680 18:33:42 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:35:11.681 18:33:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:35:11.681 BaseBdev2 00:35:11.681 BaseBdev3 00:35:11.681 BaseBdev4' 00:35:11.681 18:33:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:35:11.681 18:33:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:35:11.681 18:33:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:35:11.681 18:33:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:35:11.681 18:33:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:35:11.681 18:33:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:11.681 18:33:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:11.681 18:33:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:11.681 18:33:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:35:11.681 18:33:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:35:11.681 18:33:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:35:11.681 18:33:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:35:11.681 18:33:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:11.681 18:33:42 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:35:11.681 18:33:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:35:11.681 18:33:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:11.681 18:33:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:35:11.681 18:33:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:35:11.681 18:33:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:35:11.681 18:33:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:35:11.681 18:33:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:11.681 18:33:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:35:11.681 18:33:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:11.681 18:33:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:11.681 18:33:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:35:11.681 18:33:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:35:11.681 18:33:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:35:11.681 18:33:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:35:11.681 18:33:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:11.681 18:33:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r 
'.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:35:11.681 18:33:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:11.681 18:33:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:11.941 18:33:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:35:11.941 18:33:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:35:11.941 18:33:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:35:11.941 18:33:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:11.941 18:33:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:11.941 [2024-12-06 18:33:42.638333] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:35:11.941 18:33:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:11.941 18:33:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:35:11.941 18:33:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:35:11.941 18:33:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:35:11.941 18:33:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:35:11.941 18:33:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:35:11.941 18:33:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:35:11.941 18:33:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:35:11.941 18:33:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # 
local expected_state=online 00:35:11.941 18:33:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:35:11.941 18:33:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:35:11.941 18:33:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:35:11.941 18:33:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:35:11.941 18:33:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:35:11.941 18:33:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:35:11.941 18:33:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:35:11.941 18:33:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:35:11.941 18:33:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:11.941 18:33:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:11.941 18:33:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:11.941 18:33:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:11.941 18:33:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:35:11.941 "name": "Existed_Raid", 00:35:11.941 "uuid": "45dbcb74-a1a3-4949-bd22-be1b3cdec881", 00:35:11.941 "strip_size_kb": 64, 00:35:11.941 "state": "online", 00:35:11.941 "raid_level": "raid5f", 00:35:11.941 "superblock": true, 00:35:11.941 "num_base_bdevs": 4, 00:35:11.941 "num_base_bdevs_discovered": 3, 00:35:11.941 "num_base_bdevs_operational": 3, 00:35:11.941 "base_bdevs_list": [ 00:35:11.941 { 00:35:11.941 "name": null, 00:35:11.941 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:35:11.941 "is_configured": false, 00:35:11.941 "data_offset": 0, 00:35:11.941 "data_size": 63488 00:35:11.941 }, 00:35:11.941 { 00:35:11.941 "name": "BaseBdev2", 00:35:11.941 "uuid": "c41ac9ea-e6cc-4e75-aa16-bdad48c5e1a9", 00:35:11.941 "is_configured": true, 00:35:11.941 "data_offset": 2048, 00:35:11.941 "data_size": 63488 00:35:11.941 }, 00:35:11.941 { 00:35:11.941 "name": "BaseBdev3", 00:35:11.941 "uuid": "1f6a6132-95c9-41e9-a757-88a2ccba3b65", 00:35:11.941 "is_configured": true, 00:35:11.941 "data_offset": 2048, 00:35:11.941 "data_size": 63488 00:35:11.941 }, 00:35:11.941 { 00:35:11.941 "name": "BaseBdev4", 00:35:11.941 "uuid": "2eb94efd-287c-4328-b068-e8e351658edd", 00:35:11.941 "is_configured": true, 00:35:11.941 "data_offset": 2048, 00:35:11.941 "data_size": 63488 00:35:11.941 } 00:35:11.941 ] 00:35:11.941 }' 00:35:11.941 18:33:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:35:11.941 18:33:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:12.510 18:33:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:35:12.510 18:33:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:35:12.510 18:33:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:12.510 18:33:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:35:12.510 18:33:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:12.510 18:33:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:12.510 18:33:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:12.510 18:33:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 
00:35:12.510 18:33:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:35:12.510 18:33:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:35:12.510 18:33:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:12.510 18:33:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:12.510 [2024-12-06 18:33:43.209328] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:35:12.510 [2024-12-06 18:33:43.209528] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:35:12.510 [2024-12-06 18:33:43.315913] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:35:12.510 18:33:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:12.510 18:33:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:35:12.510 18:33:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:35:12.510 18:33:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:12.510 18:33:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:35:12.510 18:33:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:12.510 18:33:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:12.510 18:33:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:12.510 18:33:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:35:12.510 18:33:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:35:12.510 
18:33:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:35:12.510 18:33:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:12.510 18:33:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:12.510 [2024-12-06 18:33:43.371857] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:35:12.770 18:33:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:12.771 18:33:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:35:12.771 18:33:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:35:12.771 18:33:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:12.771 18:33:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:12.771 18:33:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:12.771 18:33:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:35:12.771 18:33:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:12.771 18:33:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:35:12.771 18:33:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:35:12.771 18:33:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:35:12.771 18:33:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:12.771 18:33:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:12.771 [2024-12-06 18:33:43.535398] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:35:12.771 [2024-12-06 18:33:43.535460] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:35:12.771 18:33:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:12.771 18:33:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:35:12.771 18:33:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:35:12.771 18:33:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:35:12.771 18:33:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:12.771 18:33:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:12.771 18:33:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:12.771 18:33:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:12.771 18:33:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:35:12.771 18:33:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:35:12.771 18:33:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:35:12.771 18:33:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:35:12.771 18:33:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:35:12.771 18:33:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:35:12.771 18:33:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:12.771 18:33:43 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:35:13.031 BaseBdev2 00:35:13.031 18:33:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:13.031 18:33:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:35:13.031 18:33:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:35:13.031 18:33:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:35:13.031 18:33:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:35:13.031 18:33:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:35:13.031 18:33:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:35:13.031 18:33:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:35:13.031 18:33:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:13.031 18:33:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:13.031 18:33:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:13.031 18:33:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:35:13.031 18:33:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:13.031 18:33:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:13.031 [ 00:35:13.031 { 00:35:13.031 "name": "BaseBdev2", 00:35:13.031 "aliases": [ 00:35:13.031 "821ccc67-314e-4f69-aa78-5d9915498151" 00:35:13.031 ], 00:35:13.031 "product_name": "Malloc disk", 00:35:13.031 "block_size": 512, 00:35:13.031 "num_blocks": 65536, 00:35:13.031 "uuid": 
"821ccc67-314e-4f69-aa78-5d9915498151", 00:35:13.031 "assigned_rate_limits": { 00:35:13.031 "rw_ios_per_sec": 0, 00:35:13.031 "rw_mbytes_per_sec": 0, 00:35:13.031 "r_mbytes_per_sec": 0, 00:35:13.031 "w_mbytes_per_sec": 0 00:35:13.031 }, 00:35:13.031 "claimed": false, 00:35:13.031 "zoned": false, 00:35:13.031 "supported_io_types": { 00:35:13.031 "read": true, 00:35:13.031 "write": true, 00:35:13.031 "unmap": true, 00:35:13.031 "flush": true, 00:35:13.031 "reset": true, 00:35:13.031 "nvme_admin": false, 00:35:13.031 "nvme_io": false, 00:35:13.031 "nvme_io_md": false, 00:35:13.031 "write_zeroes": true, 00:35:13.031 "zcopy": true, 00:35:13.031 "get_zone_info": false, 00:35:13.031 "zone_management": false, 00:35:13.031 "zone_append": false, 00:35:13.031 "compare": false, 00:35:13.031 "compare_and_write": false, 00:35:13.031 "abort": true, 00:35:13.031 "seek_hole": false, 00:35:13.031 "seek_data": false, 00:35:13.031 "copy": true, 00:35:13.031 "nvme_iov_md": false 00:35:13.031 }, 00:35:13.031 "memory_domains": [ 00:35:13.031 { 00:35:13.031 "dma_device_id": "system", 00:35:13.031 "dma_device_type": 1 00:35:13.031 }, 00:35:13.031 { 00:35:13.031 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:35:13.031 "dma_device_type": 2 00:35:13.031 } 00:35:13.031 ], 00:35:13.031 "driver_specific": {} 00:35:13.031 } 00:35:13.031 ] 00:35:13.031 18:33:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:13.031 18:33:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:35:13.031 18:33:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:35:13.031 18:33:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:35:13.031 18:33:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:35:13.031 18:33:43 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:35:13.031 18:33:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:13.031 BaseBdev3 00:35:13.031 18:33:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:13.031 18:33:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:35:13.031 18:33:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:35:13.031 18:33:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:35:13.031 18:33:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:35:13.031 18:33:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:35:13.031 18:33:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:35:13.031 18:33:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:35:13.031 18:33:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:13.031 18:33:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:13.031 18:33:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:13.031 18:33:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:35:13.031 18:33:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:13.032 18:33:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:13.032 [ 00:35:13.032 { 00:35:13.032 "name": "BaseBdev3", 00:35:13.032 "aliases": [ 00:35:13.032 "399082bb-97f9-4952-ad90-35007a4c4c4e" 00:35:13.032 ], 00:35:13.032 
"product_name": "Malloc disk", 00:35:13.032 "block_size": 512, 00:35:13.032 "num_blocks": 65536, 00:35:13.032 "uuid": "399082bb-97f9-4952-ad90-35007a4c4c4e", 00:35:13.032 "assigned_rate_limits": { 00:35:13.032 "rw_ios_per_sec": 0, 00:35:13.032 "rw_mbytes_per_sec": 0, 00:35:13.032 "r_mbytes_per_sec": 0, 00:35:13.032 "w_mbytes_per_sec": 0 00:35:13.032 }, 00:35:13.032 "claimed": false, 00:35:13.032 "zoned": false, 00:35:13.032 "supported_io_types": { 00:35:13.032 "read": true, 00:35:13.032 "write": true, 00:35:13.032 "unmap": true, 00:35:13.032 "flush": true, 00:35:13.032 "reset": true, 00:35:13.032 "nvme_admin": false, 00:35:13.032 "nvme_io": false, 00:35:13.032 "nvme_io_md": false, 00:35:13.032 "write_zeroes": true, 00:35:13.032 "zcopy": true, 00:35:13.032 "get_zone_info": false, 00:35:13.032 "zone_management": false, 00:35:13.032 "zone_append": false, 00:35:13.032 "compare": false, 00:35:13.032 "compare_and_write": false, 00:35:13.032 "abort": true, 00:35:13.032 "seek_hole": false, 00:35:13.032 "seek_data": false, 00:35:13.032 "copy": true, 00:35:13.032 "nvme_iov_md": false 00:35:13.032 }, 00:35:13.032 "memory_domains": [ 00:35:13.032 { 00:35:13.032 "dma_device_id": "system", 00:35:13.032 "dma_device_type": 1 00:35:13.032 }, 00:35:13.032 { 00:35:13.032 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:35:13.032 "dma_device_type": 2 00:35:13.032 } 00:35:13.032 ], 00:35:13.032 "driver_specific": {} 00:35:13.032 } 00:35:13.032 ] 00:35:13.032 18:33:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:13.032 18:33:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:35:13.032 18:33:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:35:13.032 18:33:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:35:13.032 18:33:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd 
bdev_malloc_create 32 512 -b BaseBdev4 00:35:13.032 18:33:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:13.032 18:33:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:13.032 BaseBdev4 00:35:13.032 18:33:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:13.032 18:33:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:35:13.032 18:33:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:35:13.032 18:33:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:35:13.032 18:33:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:35:13.032 18:33:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:35:13.032 18:33:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:35:13.032 18:33:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:35:13.032 18:33:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:13.032 18:33:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:13.032 18:33:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:13.032 18:33:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:35:13.032 18:33:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:13.032 18:33:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:13.032 [ 00:35:13.032 { 00:35:13.032 "name": "BaseBdev4", 00:35:13.032 
"aliases": [ 00:35:13.032 "f595fc20-45dc-4ee0-a8a8-dd18628b3b06" 00:35:13.032 ], 00:35:13.032 "product_name": "Malloc disk", 00:35:13.032 "block_size": 512, 00:35:13.032 "num_blocks": 65536, 00:35:13.032 "uuid": "f595fc20-45dc-4ee0-a8a8-dd18628b3b06", 00:35:13.032 "assigned_rate_limits": { 00:35:13.032 "rw_ios_per_sec": 0, 00:35:13.032 "rw_mbytes_per_sec": 0, 00:35:13.032 "r_mbytes_per_sec": 0, 00:35:13.032 "w_mbytes_per_sec": 0 00:35:13.032 }, 00:35:13.032 "claimed": false, 00:35:13.032 "zoned": false, 00:35:13.032 "supported_io_types": { 00:35:13.032 "read": true, 00:35:13.032 "write": true, 00:35:13.032 "unmap": true, 00:35:13.032 "flush": true, 00:35:13.032 "reset": true, 00:35:13.032 "nvme_admin": false, 00:35:13.032 "nvme_io": false, 00:35:13.032 "nvme_io_md": false, 00:35:13.032 "write_zeroes": true, 00:35:13.032 "zcopy": true, 00:35:13.032 "get_zone_info": false, 00:35:13.032 "zone_management": false, 00:35:13.032 "zone_append": false, 00:35:13.032 "compare": false, 00:35:13.032 "compare_and_write": false, 00:35:13.032 "abort": true, 00:35:13.032 "seek_hole": false, 00:35:13.032 "seek_data": false, 00:35:13.032 "copy": true, 00:35:13.032 "nvme_iov_md": false 00:35:13.032 }, 00:35:13.032 "memory_domains": [ 00:35:13.032 { 00:35:13.032 "dma_device_id": "system", 00:35:13.032 "dma_device_type": 1 00:35:13.032 }, 00:35:13.032 { 00:35:13.032 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:35:13.032 "dma_device_type": 2 00:35:13.032 } 00:35:13.032 ], 00:35:13.032 "driver_specific": {} 00:35:13.032 } 00:35:13.032 ] 00:35:13.032 18:33:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:13.032 18:33:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:35:13.032 18:33:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:35:13.032 18:33:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:35:13.032 
18:33:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:35:13.032 18:33:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:13.032 18:33:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:13.292 [2024-12-06 18:33:43.980176] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:35:13.292 [2024-12-06 18:33:43.980229] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:35:13.292 [2024-12-06 18:33:43.980257] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:35:13.292 [2024-12-06 18:33:43.982627] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:35:13.292 [2024-12-06 18:33:43.982849] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:35:13.292 18:33:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:13.292 18:33:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:35:13.292 18:33:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:35:13.292 18:33:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:35:13.292 18:33:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:35:13.292 18:33:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:35:13.292 18:33:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:35:13.292 18:33:43 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:35:13.292 18:33:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:35:13.292 18:33:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:35:13.292 18:33:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:35:13.292 18:33:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:13.292 18:33:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:13.292 18:33:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:35:13.292 18:33:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:13.292 18:33:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:13.292 18:33:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:35:13.292 "name": "Existed_Raid", 00:35:13.292 "uuid": "20baaf45-1e48-4c46-9fbd-f0025d74ac56", 00:35:13.292 "strip_size_kb": 64, 00:35:13.292 "state": "configuring", 00:35:13.292 "raid_level": "raid5f", 00:35:13.292 "superblock": true, 00:35:13.292 "num_base_bdevs": 4, 00:35:13.292 "num_base_bdevs_discovered": 3, 00:35:13.292 "num_base_bdevs_operational": 4, 00:35:13.292 "base_bdevs_list": [ 00:35:13.292 { 00:35:13.292 "name": "BaseBdev1", 00:35:13.292 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:13.292 "is_configured": false, 00:35:13.292 "data_offset": 0, 00:35:13.292 "data_size": 0 00:35:13.292 }, 00:35:13.292 { 00:35:13.292 "name": "BaseBdev2", 00:35:13.292 "uuid": "821ccc67-314e-4f69-aa78-5d9915498151", 00:35:13.292 "is_configured": true, 00:35:13.292 "data_offset": 2048, 00:35:13.292 "data_size": 63488 00:35:13.292 }, 00:35:13.292 { 00:35:13.292 "name": "BaseBdev3", 
00:35:13.292 "uuid": "399082bb-97f9-4952-ad90-35007a4c4c4e", 00:35:13.292 "is_configured": true, 00:35:13.292 "data_offset": 2048, 00:35:13.292 "data_size": 63488 00:35:13.292 }, 00:35:13.292 { 00:35:13.292 "name": "BaseBdev4", 00:35:13.292 "uuid": "f595fc20-45dc-4ee0-a8a8-dd18628b3b06", 00:35:13.292 "is_configured": true, 00:35:13.292 "data_offset": 2048, 00:35:13.292 "data_size": 63488 00:35:13.292 } 00:35:13.292 ] 00:35:13.292 }' 00:35:13.292 18:33:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:35:13.292 18:33:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:13.551 18:33:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:35:13.551 18:33:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:13.551 18:33:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:13.551 [2024-12-06 18:33:44.383535] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:35:13.551 18:33:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:13.551 18:33:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:35:13.551 18:33:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:35:13.551 18:33:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:35:13.551 18:33:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:35:13.551 18:33:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:35:13.551 18:33:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:35:13.551 
18:33:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:35:13.551 18:33:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:35:13.551 18:33:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:35:13.551 18:33:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:35:13.551 18:33:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:13.551 18:33:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:13.551 18:33:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:13.551 18:33:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:35:13.551 18:33:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:13.551 18:33:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:35:13.551 "name": "Existed_Raid", 00:35:13.551 "uuid": "20baaf45-1e48-4c46-9fbd-f0025d74ac56", 00:35:13.551 "strip_size_kb": 64, 00:35:13.551 "state": "configuring", 00:35:13.551 "raid_level": "raid5f", 00:35:13.551 "superblock": true, 00:35:13.551 "num_base_bdevs": 4, 00:35:13.551 "num_base_bdevs_discovered": 2, 00:35:13.551 "num_base_bdevs_operational": 4, 00:35:13.551 "base_bdevs_list": [ 00:35:13.551 { 00:35:13.551 "name": "BaseBdev1", 00:35:13.551 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:13.551 "is_configured": false, 00:35:13.551 "data_offset": 0, 00:35:13.551 "data_size": 0 00:35:13.551 }, 00:35:13.551 { 00:35:13.551 "name": null, 00:35:13.551 "uuid": "821ccc67-314e-4f69-aa78-5d9915498151", 00:35:13.551 "is_configured": false, 00:35:13.551 "data_offset": 0, 00:35:13.551 "data_size": 63488 00:35:13.551 }, 00:35:13.551 { 
00:35:13.551 "name": "BaseBdev3", 00:35:13.551 "uuid": "399082bb-97f9-4952-ad90-35007a4c4c4e", 00:35:13.551 "is_configured": true, 00:35:13.551 "data_offset": 2048, 00:35:13.551 "data_size": 63488 00:35:13.551 }, 00:35:13.551 { 00:35:13.551 "name": "BaseBdev4", 00:35:13.551 "uuid": "f595fc20-45dc-4ee0-a8a8-dd18628b3b06", 00:35:13.551 "is_configured": true, 00:35:13.551 "data_offset": 2048, 00:35:13.551 "data_size": 63488 00:35:13.551 } 00:35:13.551 ] 00:35:13.551 }' 00:35:13.551 18:33:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:35:13.551 18:33:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:14.120 18:33:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:14.120 18:33:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:35:14.120 18:33:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:14.120 18:33:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:14.120 18:33:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:14.120 18:33:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:35:14.120 18:33:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:35:14.120 18:33:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:14.120 18:33:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:14.120 [2024-12-06 18:33:44.863715] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:35:14.120 BaseBdev1 00:35:14.120 18:33:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:35:14.120 18:33:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:35:14.120 18:33:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:35:14.120 18:33:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:35:14.120 18:33:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:35:14.120 18:33:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:35:14.120 18:33:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:35:14.120 18:33:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:35:14.120 18:33:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:14.120 18:33:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:14.120 18:33:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:14.120 18:33:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:35:14.120 18:33:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:14.120 18:33:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:14.120 [ 00:35:14.120 { 00:35:14.120 "name": "BaseBdev1", 00:35:14.120 "aliases": [ 00:35:14.120 "81c6430c-3ec8-4c55-8b62-03c9f6581f6a" 00:35:14.120 ], 00:35:14.120 "product_name": "Malloc disk", 00:35:14.120 "block_size": 512, 00:35:14.120 "num_blocks": 65536, 00:35:14.120 "uuid": "81c6430c-3ec8-4c55-8b62-03c9f6581f6a", 00:35:14.120 "assigned_rate_limits": { 00:35:14.120 "rw_ios_per_sec": 0, 00:35:14.120 "rw_mbytes_per_sec": 0, 00:35:14.120 
"r_mbytes_per_sec": 0, 00:35:14.120 "w_mbytes_per_sec": 0 00:35:14.120 }, 00:35:14.120 "claimed": true, 00:35:14.120 "claim_type": "exclusive_write", 00:35:14.120 "zoned": false, 00:35:14.120 "supported_io_types": { 00:35:14.120 "read": true, 00:35:14.120 "write": true, 00:35:14.120 "unmap": true, 00:35:14.120 "flush": true, 00:35:14.120 "reset": true, 00:35:14.120 "nvme_admin": false, 00:35:14.120 "nvme_io": false, 00:35:14.120 "nvme_io_md": false, 00:35:14.120 "write_zeroes": true, 00:35:14.120 "zcopy": true, 00:35:14.120 "get_zone_info": false, 00:35:14.120 "zone_management": false, 00:35:14.120 "zone_append": false, 00:35:14.120 "compare": false, 00:35:14.120 "compare_and_write": false, 00:35:14.120 "abort": true, 00:35:14.120 "seek_hole": false, 00:35:14.120 "seek_data": false, 00:35:14.120 "copy": true, 00:35:14.120 "nvme_iov_md": false 00:35:14.120 }, 00:35:14.120 "memory_domains": [ 00:35:14.120 { 00:35:14.120 "dma_device_id": "system", 00:35:14.120 "dma_device_type": 1 00:35:14.120 }, 00:35:14.120 { 00:35:14.120 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:35:14.121 "dma_device_type": 2 00:35:14.121 } 00:35:14.121 ], 00:35:14.121 "driver_specific": {} 00:35:14.121 } 00:35:14.121 ] 00:35:14.121 18:33:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:14.121 18:33:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:35:14.121 18:33:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:35:14.121 18:33:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:35:14.121 18:33:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:35:14.121 18:33:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:35:14.121 18:33:44 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:35:14.121 18:33:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:35:14.121 18:33:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:35:14.121 18:33:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:35:14.121 18:33:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:35:14.121 18:33:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:35:14.121 18:33:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:14.121 18:33:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:35:14.121 18:33:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:14.121 18:33:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:14.121 18:33:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:14.121 18:33:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:35:14.121 "name": "Existed_Raid", 00:35:14.121 "uuid": "20baaf45-1e48-4c46-9fbd-f0025d74ac56", 00:35:14.121 "strip_size_kb": 64, 00:35:14.121 "state": "configuring", 00:35:14.121 "raid_level": "raid5f", 00:35:14.121 "superblock": true, 00:35:14.121 "num_base_bdevs": 4, 00:35:14.121 "num_base_bdevs_discovered": 3, 00:35:14.121 "num_base_bdevs_operational": 4, 00:35:14.121 "base_bdevs_list": [ 00:35:14.121 { 00:35:14.121 "name": "BaseBdev1", 00:35:14.121 "uuid": "81c6430c-3ec8-4c55-8b62-03c9f6581f6a", 00:35:14.121 "is_configured": true, 00:35:14.121 "data_offset": 2048, 00:35:14.121 "data_size": 63488 00:35:14.121 
}, 00:35:14.121 { 00:35:14.121 "name": null, 00:35:14.121 "uuid": "821ccc67-314e-4f69-aa78-5d9915498151", 00:35:14.121 "is_configured": false, 00:35:14.121 "data_offset": 0, 00:35:14.121 "data_size": 63488 00:35:14.121 }, 00:35:14.121 { 00:35:14.121 "name": "BaseBdev3", 00:35:14.121 "uuid": "399082bb-97f9-4952-ad90-35007a4c4c4e", 00:35:14.121 "is_configured": true, 00:35:14.121 "data_offset": 2048, 00:35:14.121 "data_size": 63488 00:35:14.121 }, 00:35:14.121 { 00:35:14.121 "name": "BaseBdev4", 00:35:14.121 "uuid": "f595fc20-45dc-4ee0-a8a8-dd18628b3b06", 00:35:14.121 "is_configured": true, 00:35:14.121 "data_offset": 2048, 00:35:14.121 "data_size": 63488 00:35:14.121 } 00:35:14.121 ] 00:35:14.121 }' 00:35:14.121 18:33:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:35:14.121 18:33:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:14.394 18:33:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:14.395 18:33:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:35:14.395 18:33:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:14.395 18:33:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:14.656 18:33:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:14.656 18:33:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:35:14.656 18:33:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:35:14.656 18:33:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:14.656 18:33:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:14.656 
[2024-12-06 18:33:45.371298] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:35:14.656 18:33:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:14.656 18:33:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:35:14.656 18:33:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:35:14.656 18:33:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:35:14.656 18:33:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:35:14.656 18:33:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:35:14.656 18:33:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:35:14.656 18:33:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:35:14.656 18:33:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:35:14.656 18:33:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:35:14.656 18:33:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:35:14.656 18:33:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:35:14.656 18:33:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:14.656 18:33:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:14.656 18:33:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:14.656 18:33:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:35:14.656 18:33:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:35:14.656 "name": "Existed_Raid", 00:35:14.656 "uuid": "20baaf45-1e48-4c46-9fbd-f0025d74ac56", 00:35:14.656 "strip_size_kb": 64, 00:35:14.656 "state": "configuring", 00:35:14.656 "raid_level": "raid5f", 00:35:14.656 "superblock": true, 00:35:14.656 "num_base_bdevs": 4, 00:35:14.656 "num_base_bdevs_discovered": 2, 00:35:14.656 "num_base_bdevs_operational": 4, 00:35:14.656 "base_bdevs_list": [ 00:35:14.656 { 00:35:14.656 "name": "BaseBdev1", 00:35:14.656 "uuid": "81c6430c-3ec8-4c55-8b62-03c9f6581f6a", 00:35:14.656 "is_configured": true, 00:35:14.656 "data_offset": 2048, 00:35:14.656 "data_size": 63488 00:35:14.656 }, 00:35:14.656 { 00:35:14.656 "name": null, 00:35:14.656 "uuid": "821ccc67-314e-4f69-aa78-5d9915498151", 00:35:14.656 "is_configured": false, 00:35:14.656 "data_offset": 0, 00:35:14.656 "data_size": 63488 00:35:14.656 }, 00:35:14.656 { 00:35:14.656 "name": null, 00:35:14.656 "uuid": "399082bb-97f9-4952-ad90-35007a4c4c4e", 00:35:14.656 "is_configured": false, 00:35:14.656 "data_offset": 0, 00:35:14.656 "data_size": 63488 00:35:14.656 }, 00:35:14.656 { 00:35:14.656 "name": "BaseBdev4", 00:35:14.656 "uuid": "f595fc20-45dc-4ee0-a8a8-dd18628b3b06", 00:35:14.656 "is_configured": true, 00:35:14.656 "data_offset": 2048, 00:35:14.656 "data_size": 63488 00:35:14.656 } 00:35:14.656 ] 00:35:14.656 }' 00:35:14.656 18:33:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:35:14.656 18:33:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:14.914 18:33:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:35:14.914 18:33:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:14.914 18:33:45 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:35:14.914 18:33:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:14.914 18:33:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:14.914 18:33:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:35:14.914 18:33:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:35:14.914 18:33:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:14.914 18:33:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:14.914 [2024-12-06 18:33:45.787655] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:35:14.914 18:33:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:14.914 18:33:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:35:14.914 18:33:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:35:14.914 18:33:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:35:14.914 18:33:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:35:14.914 18:33:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:35:14.914 18:33:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:35:14.914 18:33:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:35:14.914 18:33:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:35:14.914 18:33:45 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:35:14.914 18:33:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:35:14.914 18:33:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:35:14.914 18:33:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:14.914 18:33:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:14.914 18:33:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:14.914 18:33:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:14.914 18:33:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:35:14.914 "name": "Existed_Raid", 00:35:14.914 "uuid": "20baaf45-1e48-4c46-9fbd-f0025d74ac56", 00:35:14.914 "strip_size_kb": 64, 00:35:14.915 "state": "configuring", 00:35:14.915 "raid_level": "raid5f", 00:35:14.915 "superblock": true, 00:35:14.915 "num_base_bdevs": 4, 00:35:14.915 "num_base_bdevs_discovered": 3, 00:35:14.915 "num_base_bdevs_operational": 4, 00:35:14.915 "base_bdevs_list": [ 00:35:14.915 { 00:35:14.915 "name": "BaseBdev1", 00:35:14.915 "uuid": "81c6430c-3ec8-4c55-8b62-03c9f6581f6a", 00:35:14.915 "is_configured": true, 00:35:14.915 "data_offset": 2048, 00:35:14.915 "data_size": 63488 00:35:14.915 }, 00:35:14.915 { 00:35:14.915 "name": null, 00:35:14.915 "uuid": "821ccc67-314e-4f69-aa78-5d9915498151", 00:35:14.915 "is_configured": false, 00:35:14.915 "data_offset": 0, 00:35:14.915 "data_size": 63488 00:35:14.915 }, 00:35:14.915 { 00:35:14.915 "name": "BaseBdev3", 00:35:14.915 "uuid": "399082bb-97f9-4952-ad90-35007a4c4c4e", 00:35:14.915 "is_configured": true, 00:35:14.915 "data_offset": 2048, 00:35:14.915 "data_size": 63488 00:35:14.915 }, 00:35:14.915 { 
00:35:14.915 "name": "BaseBdev4", 00:35:14.915 "uuid": "f595fc20-45dc-4ee0-a8a8-dd18628b3b06", 00:35:14.915 "is_configured": true, 00:35:14.915 "data_offset": 2048, 00:35:14.915 "data_size": 63488 00:35:14.915 } 00:35:14.915 ] 00:35:14.915 }' 00:35:14.915 18:33:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:35:14.915 18:33:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:15.498 18:33:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:15.498 18:33:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:15.498 18:33:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:15.498 18:33:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:35:15.498 18:33:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:15.498 18:33:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:35:15.498 18:33:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:35:15.498 18:33:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:15.498 18:33:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:15.498 [2024-12-06 18:33:46.247321] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:35:15.498 18:33:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:15.498 18:33:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:35:15.498 18:33:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=Existed_Raid 00:35:15.498 18:33:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:35:15.498 18:33:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:35:15.498 18:33:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:35:15.498 18:33:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:35:15.498 18:33:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:35:15.498 18:33:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:35:15.498 18:33:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:35:15.498 18:33:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:35:15.498 18:33:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:15.498 18:33:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:35:15.498 18:33:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:15.498 18:33:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:15.498 18:33:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:15.498 18:33:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:35:15.498 "name": "Existed_Raid", 00:35:15.498 "uuid": "20baaf45-1e48-4c46-9fbd-f0025d74ac56", 00:35:15.498 "strip_size_kb": 64, 00:35:15.498 "state": "configuring", 00:35:15.498 "raid_level": "raid5f", 00:35:15.498 "superblock": true, 00:35:15.498 "num_base_bdevs": 4, 00:35:15.498 "num_base_bdevs_discovered": 2, 00:35:15.498 
"num_base_bdevs_operational": 4, 00:35:15.498 "base_bdevs_list": [ 00:35:15.498 { 00:35:15.498 "name": null, 00:35:15.498 "uuid": "81c6430c-3ec8-4c55-8b62-03c9f6581f6a", 00:35:15.498 "is_configured": false, 00:35:15.498 "data_offset": 0, 00:35:15.498 "data_size": 63488 00:35:15.498 }, 00:35:15.498 { 00:35:15.498 "name": null, 00:35:15.498 "uuid": "821ccc67-314e-4f69-aa78-5d9915498151", 00:35:15.498 "is_configured": false, 00:35:15.498 "data_offset": 0, 00:35:15.498 "data_size": 63488 00:35:15.498 }, 00:35:15.498 { 00:35:15.498 "name": "BaseBdev3", 00:35:15.498 "uuid": "399082bb-97f9-4952-ad90-35007a4c4c4e", 00:35:15.498 "is_configured": true, 00:35:15.498 "data_offset": 2048, 00:35:15.498 "data_size": 63488 00:35:15.498 }, 00:35:15.498 { 00:35:15.498 "name": "BaseBdev4", 00:35:15.498 "uuid": "f595fc20-45dc-4ee0-a8a8-dd18628b3b06", 00:35:15.498 "is_configured": true, 00:35:15.498 "data_offset": 2048, 00:35:15.498 "data_size": 63488 00:35:15.498 } 00:35:15.498 ] 00:35:15.498 }' 00:35:15.498 18:33:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:35:15.498 18:33:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:16.068 18:33:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:16.068 18:33:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:35:16.068 18:33:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:16.068 18:33:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:16.068 18:33:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:16.068 18:33:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:35:16.068 18:33:46 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:35:16.068 18:33:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:16.068 18:33:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:16.068 [2024-12-06 18:33:46.828607] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:35:16.068 18:33:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:16.068 18:33:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:35:16.068 18:33:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:35:16.068 18:33:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:35:16.068 18:33:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:35:16.068 18:33:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:35:16.068 18:33:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:35:16.068 18:33:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:35:16.068 18:33:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:35:16.068 18:33:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:35:16.068 18:33:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:35:16.068 18:33:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:35:16.068 18:33:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:35:16.068 18:33:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:16.068 18:33:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:16.068 18:33:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:16.068 18:33:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:35:16.068 "name": "Existed_Raid", 00:35:16.068 "uuid": "20baaf45-1e48-4c46-9fbd-f0025d74ac56", 00:35:16.068 "strip_size_kb": 64, 00:35:16.068 "state": "configuring", 00:35:16.068 "raid_level": "raid5f", 00:35:16.068 "superblock": true, 00:35:16.068 "num_base_bdevs": 4, 00:35:16.068 "num_base_bdevs_discovered": 3, 00:35:16.068 "num_base_bdevs_operational": 4, 00:35:16.068 "base_bdevs_list": [ 00:35:16.068 { 00:35:16.068 "name": null, 00:35:16.068 "uuid": "81c6430c-3ec8-4c55-8b62-03c9f6581f6a", 00:35:16.068 "is_configured": false, 00:35:16.068 "data_offset": 0, 00:35:16.068 "data_size": 63488 00:35:16.068 }, 00:35:16.068 { 00:35:16.068 "name": "BaseBdev2", 00:35:16.068 "uuid": "821ccc67-314e-4f69-aa78-5d9915498151", 00:35:16.068 "is_configured": true, 00:35:16.068 "data_offset": 2048, 00:35:16.068 "data_size": 63488 00:35:16.068 }, 00:35:16.068 { 00:35:16.068 "name": "BaseBdev3", 00:35:16.068 "uuid": "399082bb-97f9-4952-ad90-35007a4c4c4e", 00:35:16.068 "is_configured": true, 00:35:16.068 "data_offset": 2048, 00:35:16.068 "data_size": 63488 00:35:16.068 }, 00:35:16.068 { 00:35:16.068 "name": "BaseBdev4", 00:35:16.068 "uuid": "f595fc20-45dc-4ee0-a8a8-dd18628b3b06", 00:35:16.068 "is_configured": true, 00:35:16.068 "data_offset": 2048, 00:35:16.068 "data_size": 63488 00:35:16.068 } 00:35:16.068 ] 00:35:16.068 }' 00:35:16.068 18:33:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:35:16.068 18:33:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 
-- # set +x 00:35:16.327 18:33:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:16.327 18:33:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:35:16.327 18:33:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:16.327 18:33:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:16.327 18:33:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:16.327 18:33:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:35:16.327 18:33:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:16.327 18:33:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:35:16.327 18:33:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:16.327 18:33:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:16.587 18:33:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:16.587 18:33:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 81c6430c-3ec8-4c55-8b62-03c9f6581f6a 00:35:16.587 18:33:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:16.587 18:33:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:16.587 [2024-12-06 18:33:47.340088] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:35:16.587 [2024-12-06 18:33:47.340402] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:35:16.587 [2024-12-06 
18:33:47.340419] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:35:16.587 [2024-12-06 18:33:47.340733] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:35:16.587 NewBaseBdev 00:35:16.587 18:33:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:16.587 18:33:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:35:16.587 18:33:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:35:16.587 18:33:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:35:16.587 18:33:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:35:16.587 18:33:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:35:16.587 18:33:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:35:16.587 18:33:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:35:16.587 18:33:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:16.587 18:33:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:16.587 [2024-12-06 18:33:47.348416] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:35:16.587 [2024-12-06 18:33:47.348557] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:35:16.587 [2024-12-06 18:33:47.348992] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:35:16.587 18:33:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:16.587 18:33:47 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:35:16.587 18:33:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:16.587 18:33:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:16.587 [ 00:35:16.587 { 00:35:16.587 "name": "NewBaseBdev", 00:35:16.588 "aliases": [ 00:35:16.588 "81c6430c-3ec8-4c55-8b62-03c9f6581f6a" 00:35:16.588 ], 00:35:16.588 "product_name": "Malloc disk", 00:35:16.588 "block_size": 512, 00:35:16.588 "num_blocks": 65536, 00:35:16.588 "uuid": "81c6430c-3ec8-4c55-8b62-03c9f6581f6a", 00:35:16.588 "assigned_rate_limits": { 00:35:16.588 "rw_ios_per_sec": 0, 00:35:16.588 "rw_mbytes_per_sec": 0, 00:35:16.588 "r_mbytes_per_sec": 0, 00:35:16.588 "w_mbytes_per_sec": 0 00:35:16.588 }, 00:35:16.588 "claimed": true, 00:35:16.588 "claim_type": "exclusive_write", 00:35:16.588 "zoned": false, 00:35:16.588 "supported_io_types": { 00:35:16.588 "read": true, 00:35:16.588 "write": true, 00:35:16.588 "unmap": true, 00:35:16.588 "flush": true, 00:35:16.588 "reset": true, 00:35:16.588 "nvme_admin": false, 00:35:16.588 "nvme_io": false, 00:35:16.588 "nvme_io_md": false, 00:35:16.588 "write_zeroes": true, 00:35:16.588 "zcopy": true, 00:35:16.588 "get_zone_info": false, 00:35:16.588 "zone_management": false, 00:35:16.588 "zone_append": false, 00:35:16.588 "compare": false, 00:35:16.588 "compare_and_write": false, 00:35:16.588 "abort": true, 00:35:16.588 "seek_hole": false, 00:35:16.588 "seek_data": false, 00:35:16.588 "copy": true, 00:35:16.588 "nvme_iov_md": false 00:35:16.588 }, 00:35:16.588 "memory_domains": [ 00:35:16.588 { 00:35:16.588 "dma_device_id": "system", 00:35:16.588 "dma_device_type": 1 00:35:16.588 }, 00:35:16.588 { 00:35:16.588 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:35:16.588 "dma_device_type": 2 00:35:16.588 } 00:35:16.588 ], 00:35:16.588 "driver_specific": {} 00:35:16.588 } 00:35:16.588 ] 00:35:16.588 18:33:47 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:16.588 18:33:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:35:16.588 18:33:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:35:16.588 18:33:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:35:16.588 18:33:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:35:16.588 18:33:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:35:16.588 18:33:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:35:16.588 18:33:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:35:16.588 18:33:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:35:16.588 18:33:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:35:16.588 18:33:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:35:16.588 18:33:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:35:16.588 18:33:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:16.588 18:33:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:35:16.588 18:33:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:16.588 18:33:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:16.588 18:33:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:35:16.588 18:33:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:35:16.588 "name": "Existed_Raid", 00:35:16.588 "uuid": "20baaf45-1e48-4c46-9fbd-f0025d74ac56", 00:35:16.588 "strip_size_kb": 64, 00:35:16.588 "state": "online", 00:35:16.588 "raid_level": "raid5f", 00:35:16.588 "superblock": true, 00:35:16.588 "num_base_bdevs": 4, 00:35:16.588 "num_base_bdevs_discovered": 4, 00:35:16.588 "num_base_bdevs_operational": 4, 00:35:16.588 "base_bdevs_list": [ 00:35:16.588 { 00:35:16.588 "name": "NewBaseBdev", 00:35:16.588 "uuid": "81c6430c-3ec8-4c55-8b62-03c9f6581f6a", 00:35:16.588 "is_configured": true, 00:35:16.588 "data_offset": 2048, 00:35:16.588 "data_size": 63488 00:35:16.588 }, 00:35:16.588 { 00:35:16.588 "name": "BaseBdev2", 00:35:16.588 "uuid": "821ccc67-314e-4f69-aa78-5d9915498151", 00:35:16.588 "is_configured": true, 00:35:16.588 "data_offset": 2048, 00:35:16.588 "data_size": 63488 00:35:16.588 }, 00:35:16.588 { 00:35:16.588 "name": "BaseBdev3", 00:35:16.588 "uuid": "399082bb-97f9-4952-ad90-35007a4c4c4e", 00:35:16.588 "is_configured": true, 00:35:16.588 "data_offset": 2048, 00:35:16.588 "data_size": 63488 00:35:16.588 }, 00:35:16.588 { 00:35:16.588 "name": "BaseBdev4", 00:35:16.588 "uuid": "f595fc20-45dc-4ee0-a8a8-dd18628b3b06", 00:35:16.588 "is_configured": true, 00:35:16.588 "data_offset": 2048, 00:35:16.588 "data_size": 63488 00:35:16.588 } 00:35:16.588 ] 00:35:16.588 }' 00:35:16.588 18:33:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:35:16.588 18:33:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:16.847 18:33:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:35:16.847 18:33:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:35:16.847 18:33:47 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:35:16.847 18:33:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:35:16.847 18:33:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:35:16.847 18:33:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:35:16.847 18:33:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:35:16.847 18:33:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:35:16.847 18:33:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:16.847 18:33:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:16.847 [2024-12-06 18:33:47.770473] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:35:17.107 18:33:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:17.107 18:33:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:35:17.107 "name": "Existed_Raid", 00:35:17.107 "aliases": [ 00:35:17.107 "20baaf45-1e48-4c46-9fbd-f0025d74ac56" 00:35:17.107 ], 00:35:17.107 "product_name": "Raid Volume", 00:35:17.107 "block_size": 512, 00:35:17.107 "num_blocks": 190464, 00:35:17.107 "uuid": "20baaf45-1e48-4c46-9fbd-f0025d74ac56", 00:35:17.107 "assigned_rate_limits": { 00:35:17.107 "rw_ios_per_sec": 0, 00:35:17.107 "rw_mbytes_per_sec": 0, 00:35:17.107 "r_mbytes_per_sec": 0, 00:35:17.107 "w_mbytes_per_sec": 0 00:35:17.107 }, 00:35:17.107 "claimed": false, 00:35:17.107 "zoned": false, 00:35:17.107 "supported_io_types": { 00:35:17.107 "read": true, 00:35:17.107 "write": true, 00:35:17.107 "unmap": false, 00:35:17.107 "flush": false, 00:35:17.107 "reset": true, 00:35:17.107 "nvme_admin": false, 00:35:17.107 "nvme_io": false, 
00:35:17.107 "nvme_io_md": false, 00:35:17.107 "write_zeroes": true, 00:35:17.107 "zcopy": false, 00:35:17.107 "get_zone_info": false, 00:35:17.107 "zone_management": false, 00:35:17.107 "zone_append": false, 00:35:17.107 "compare": false, 00:35:17.108 "compare_and_write": false, 00:35:17.108 "abort": false, 00:35:17.108 "seek_hole": false, 00:35:17.108 "seek_data": false, 00:35:17.108 "copy": false, 00:35:17.108 "nvme_iov_md": false 00:35:17.108 }, 00:35:17.108 "driver_specific": { 00:35:17.108 "raid": { 00:35:17.108 "uuid": "20baaf45-1e48-4c46-9fbd-f0025d74ac56", 00:35:17.108 "strip_size_kb": 64, 00:35:17.108 "state": "online", 00:35:17.108 "raid_level": "raid5f", 00:35:17.108 "superblock": true, 00:35:17.108 "num_base_bdevs": 4, 00:35:17.108 "num_base_bdevs_discovered": 4, 00:35:17.108 "num_base_bdevs_operational": 4, 00:35:17.108 "base_bdevs_list": [ 00:35:17.108 { 00:35:17.108 "name": "NewBaseBdev", 00:35:17.108 "uuid": "81c6430c-3ec8-4c55-8b62-03c9f6581f6a", 00:35:17.108 "is_configured": true, 00:35:17.108 "data_offset": 2048, 00:35:17.108 "data_size": 63488 00:35:17.108 }, 00:35:17.108 { 00:35:17.108 "name": "BaseBdev2", 00:35:17.108 "uuid": "821ccc67-314e-4f69-aa78-5d9915498151", 00:35:17.108 "is_configured": true, 00:35:17.108 "data_offset": 2048, 00:35:17.108 "data_size": 63488 00:35:17.108 }, 00:35:17.108 { 00:35:17.108 "name": "BaseBdev3", 00:35:17.108 "uuid": "399082bb-97f9-4952-ad90-35007a4c4c4e", 00:35:17.108 "is_configured": true, 00:35:17.108 "data_offset": 2048, 00:35:17.108 "data_size": 63488 00:35:17.108 }, 00:35:17.108 { 00:35:17.108 "name": "BaseBdev4", 00:35:17.108 "uuid": "f595fc20-45dc-4ee0-a8a8-dd18628b3b06", 00:35:17.108 "is_configured": true, 00:35:17.108 "data_offset": 2048, 00:35:17.108 "data_size": 63488 00:35:17.108 } 00:35:17.108 ] 00:35:17.108 } 00:35:17.108 } 00:35:17.108 }' 00:35:17.108 18:33:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | 
select(.is_configured == true).name' 00:35:17.108 18:33:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:35:17.108 BaseBdev2 00:35:17.108 BaseBdev3 00:35:17.108 BaseBdev4' 00:35:17.108 18:33:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:35:17.108 18:33:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:35:17.108 18:33:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:35:17.108 18:33:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:35:17.108 18:33:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:35:17.108 18:33:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:17.108 18:33:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:17.108 18:33:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:17.108 18:33:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:35:17.108 18:33:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:35:17.108 18:33:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:35:17.108 18:33:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:35:17.108 18:33:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:17.108 18:33:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:17.108 18:33:47 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:35:17.108 18:33:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:17.108 18:33:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:35:17.108 18:33:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:35:17.108 18:33:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:35:17.108 18:33:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:35:17.108 18:33:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:17.108 18:33:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:17.108 18:33:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:35:17.108 18:33:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:17.108 18:33:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:35:17.108 18:33:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:35:17.108 18:33:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:35:17.108 18:33:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:35:17.108 18:33:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:17.108 18:33:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:17.108 18:33:48 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:35:17.367 18:33:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:17.367 18:33:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:35:17.367 18:33:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:35:17.367 18:33:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:35:17.367 18:33:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:17.367 18:33:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:17.367 [2024-12-06 18:33:48.090256] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:35:17.367 [2024-12-06 18:33:48.090382] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:35:17.367 [2024-12-06 18:33:48.090475] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:35:17.367 [2024-12-06 18:33:48.090830] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:35:17.367 [2024-12-06 18:33:48.090845] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:35:17.367 18:33:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:17.368 18:33:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 83154 00:35:17.368 18:33:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 83154 ']' 00:35:17.368 18:33:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 83154 00:35:17.368 18:33:48 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:35:17.368 18:33:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:17.368 18:33:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83154 00:35:17.368 18:33:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:35:17.368 killing process with pid 83154 00:35:17.368 18:33:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:35:17.368 18:33:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83154' 00:35:17.368 18:33:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 83154 00:35:17.368 [2024-12-06 18:33:48.140865] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:35:17.368 18:33:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 83154 00:35:17.627 [2024-12-06 18:33:48.575599] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:35:19.008 18:33:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:35:19.008 00:35:19.008 real 0m11.296s 00:35:19.008 user 0m17.450s 00:35:19.008 sys 0m2.485s 00:35:19.008 18:33:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:19.008 18:33:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:19.008 ************************************ 00:35:19.008 END TEST raid5f_state_function_test_sb 00:35:19.008 ************************************ 00:35:19.008 18:33:49 bdev_raid -- bdev/bdev_raid.sh@988 -- # run_test raid5f_superblock_test raid_superblock_test raid5f 4 00:35:19.008 18:33:49 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:35:19.008 
18:33:49 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:19.008 18:33:49 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:35:19.008 ************************************ 00:35:19.008 START TEST raid5f_superblock_test 00:35:19.008 ************************************ 00:35:19.008 18:33:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid5f 4 00:35:19.008 18:33:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid5f 00:35:19.008 18:33:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:35:19.008 18:33:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:35:19.008 18:33:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:35:19.008 18:33:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:35:19.008 18:33:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:35:19.008 18:33:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:35:19.008 18:33:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:35:19.008 18:33:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:35:19.009 18:33:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:35:19.009 18:33:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:35:19.009 18:33:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:35:19.009 18:33:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:35:19.009 18:33:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid5f '!=' raid1 ']' 00:35:19.009 18:33:49 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@405 -- # strip_size=64 00:35:19.009 18:33:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:35:19.009 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:19.009 18:33:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=83821 00:35:19.009 18:33:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 83821 00:35:19.009 18:33:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:35:19.009 18:33:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 83821 ']' 00:35:19.009 18:33:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:19.009 18:33:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:19.009 18:33:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:19.009 18:33:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:19.009 18:33:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:35:19.268 [2024-12-06 18:33:49.991031] Starting SPDK v25.01-pre git sha1 b6a18b192 / DPDK 24.03.0 initialization... 
00:35:19.268 [2024-12-06 18:33:49.991195] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83821 ] 00:35:19.268 [2024-12-06 18:33:50.174821] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:19.527 [2024-12-06 18:33:50.305975] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:19.786 [2024-12-06 18:33:50.534362] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:35:19.786 [2024-12-06 18:33:50.534699] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:35:20.048 18:33:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:20.048 18:33:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:35:20.048 18:33:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:35:20.048 18:33:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:35:20.048 18:33:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:35:20.048 18:33:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:35:20.048 18:33:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:35:20.048 18:33:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:35:20.048 18:33:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:35:20.048 18:33:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:35:20.048 18:33:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b 
malloc1 00:35:20.048 18:33:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:20.048 18:33:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:35:20.048 malloc1 00:35:20.048 18:33:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:20.048 18:33:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:35:20.048 18:33:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:20.048 18:33:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:35:20.048 [2024-12-06 18:33:50.884088] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:35:20.048 [2024-12-06 18:33:50.884312] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:35:20.048 [2024-12-06 18:33:50.884380] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:35:20.048 [2024-12-06 18:33:50.884470] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:35:20.048 [2024-12-06 18:33:50.887263] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:35:20.048 [2024-12-06 18:33:50.887399] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:35:20.048 pt1 00:35:20.048 18:33:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:20.048 18:33:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:35:20.048 18:33:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:35:20.048 18:33:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:35:20.048 18:33:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 
00:35:20.048 18:33:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:35:20.048 18:33:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:35:20.048 18:33:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:35:20.048 18:33:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:35:20.048 18:33:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:35:20.048 18:33:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:20.048 18:33:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:35:20.048 malloc2 00:35:20.048 18:33:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:20.048 18:33:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:35:20.048 18:33:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:20.048 18:33:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:35:20.048 [2024-12-06 18:33:50.946476] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:35:20.048 [2024-12-06 18:33:50.946669] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:35:20.048 [2024-12-06 18:33:50.946738] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:35:20.048 [2024-12-06 18:33:50.946822] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:35:20.048 [2024-12-06 18:33:50.949550] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:35:20.048 [2024-12-06 18:33:50.949699] vbdev_passthru.c: 
711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:35:20.048 pt2 00:35:20.048 18:33:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:20.048 18:33:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:35:20.048 18:33:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:35:20.048 18:33:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:35:20.048 18:33:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:35:20.048 18:33:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:35:20.048 18:33:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:35:20.048 18:33:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:35:20.048 18:33:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:35:20.048 18:33:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:35:20.048 18:33:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:20.048 18:33:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:35:20.308 malloc3 00:35:20.308 18:33:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:20.308 18:33:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:35:20.308 18:33:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:20.308 18:33:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:35:20.308 [2024-12-06 18:33:51.042167] 
vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:35:20.308 [2024-12-06 18:33:51.042218] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:35:20.308 [2024-12-06 18:33:51.042243] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:35:20.308 [2024-12-06 18:33:51.042256] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:35:20.308 [2024-12-06 18:33:51.044933] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:35:20.308 [2024-12-06 18:33:51.044976] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:35:20.308 pt3 00:35:20.308 18:33:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:20.308 18:33:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:35:20.308 18:33:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:35:20.308 18:33:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:35:20.308 18:33:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:35:20.308 18:33:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:35:20.308 18:33:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:35:20.308 18:33:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:35:20.308 18:33:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:35:20.308 18:33:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:35:20.308 18:33:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:20.308 18:33:51 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:35:20.308 malloc4 00:35:20.308 18:33:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:20.308 18:33:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:35:20.308 18:33:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:20.308 18:33:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:35:20.308 [2024-12-06 18:33:51.105783] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:35:20.308 [2024-12-06 18:33:51.105845] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:35:20.308 [2024-12-06 18:33:51.105869] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:35:20.308 [2024-12-06 18:33:51.105881] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:35:20.308 [2024-12-06 18:33:51.108556] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:35:20.308 [2024-12-06 18:33:51.108704] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:35:20.308 pt4 00:35:20.308 18:33:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:20.308 18:33:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:35:20.308 18:33:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:35:20.308 18:33:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:35:20.308 18:33:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:20.308 18:33:51 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:35:20.308 [2024-12-06 18:33:51.117818] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:35:20.308 [2024-12-06 18:33:51.120134] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:35:20.308 [2024-12-06 18:33:51.120244] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:35:20.308 [2024-12-06 18:33:51.120291] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:35:20.308 [2024-12-06 18:33:51.120493] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:35:20.308 [2024-12-06 18:33:51.120512] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:35:20.308 [2024-12-06 18:33:51.120789] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:35:20.308 [2024-12-06 18:33:51.128003] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:35:20.308 [2024-12-06 18:33:51.128030] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:35:20.308 [2024-12-06 18:33:51.128247] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:35:20.308 18:33:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:20.308 18:33:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:35:20.308 18:33:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:35:20.308 18:33:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:35:20.308 18:33:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:35:20.308 18:33:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:35:20.308 
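For reference, the setup the harness has just driven (visible in the xtrace above) boils down to the RPC sequence below. This is a sketch, not runnable standalone: it assumes a running SPDK application and the `scripts/rpc.py` helper from an SPDK checkout; the loop and script path are illustrative assumptions, while the commands and flags are taken from the log itself.

```shell
# Sketch of the raid5f superblock setup exercised above (assumes a running
# SPDK target and scripts/rpc.py from an SPDK checkout; not standalone).
for i in 1 2 3 4; do
    # 32 MiB malloc bdev with 512-byte blocks ("bdev_malloc_create 32 512")
    ./scripts/rpc.py bdev_malloc_create 32 512 -b "malloc$i"
    # passthru bdev on top of it, with the fixed UUID seen in the log
    ./scripts/rpc.py bdev_passthru_create -b "malloc$i" -p "pt$i" \
        -u "00000000-0000-0000-0000-00000000000$i"
done
# raid5f volume over the four passthru bdevs: 64 KiB strip, superblock on (-s)
./scripts/rpc.py bdev_raid_create -z 64 -r raid5f -b 'pt1 pt2 pt3 pt4' \
    -n raid_bdev1 -s
```

Writing the superblock (`-s`) is what makes the later negative test meaningful: recreating the array directly on the malloc bdevs fails with "File exists" because each base bdev already carries a superblock from raid_bdev1.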
18:33:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:35:20.308 18:33:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:35:20.308 18:33:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:35:20.308 18:33:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:35:20.308 18:33:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:35:20.308 18:33:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:20.308 18:33:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:20.308 18:33:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:20.308 18:33:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:35:20.308 18:33:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:20.308 18:33:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:35:20.308 "name": "raid_bdev1", 00:35:20.308 "uuid": "88f8478d-3a8d-4e31-b230-a092658e003c", 00:35:20.308 "strip_size_kb": 64, 00:35:20.308 "state": "online", 00:35:20.308 "raid_level": "raid5f", 00:35:20.308 "superblock": true, 00:35:20.308 "num_base_bdevs": 4, 00:35:20.308 "num_base_bdevs_discovered": 4, 00:35:20.308 "num_base_bdevs_operational": 4, 00:35:20.308 "base_bdevs_list": [ 00:35:20.308 { 00:35:20.308 "name": "pt1", 00:35:20.308 "uuid": "00000000-0000-0000-0000-000000000001", 00:35:20.308 "is_configured": true, 00:35:20.308 "data_offset": 2048, 00:35:20.308 "data_size": 63488 00:35:20.308 }, 00:35:20.308 { 00:35:20.308 "name": "pt2", 00:35:20.308 "uuid": "00000000-0000-0000-0000-000000000002", 00:35:20.308 "is_configured": true, 00:35:20.308 "data_offset": 2048, 00:35:20.308 
"data_size": 63488 00:35:20.308 }, 00:35:20.308 { 00:35:20.308 "name": "pt3", 00:35:20.308 "uuid": "00000000-0000-0000-0000-000000000003", 00:35:20.308 "is_configured": true, 00:35:20.308 "data_offset": 2048, 00:35:20.308 "data_size": 63488 00:35:20.308 }, 00:35:20.308 { 00:35:20.308 "name": "pt4", 00:35:20.308 "uuid": "00000000-0000-0000-0000-000000000004", 00:35:20.308 "is_configured": true, 00:35:20.308 "data_offset": 2048, 00:35:20.308 "data_size": 63488 00:35:20.308 } 00:35:20.308 ] 00:35:20.308 }' 00:35:20.308 18:33:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:35:20.308 18:33:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:35:20.877 18:33:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:35:20.877 18:33:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:35:20.877 18:33:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:35:20.877 18:33:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:35:20.877 18:33:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:35:20.877 18:33:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:35:20.877 18:33:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:35:20.877 18:33:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:35:20.877 18:33:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:20.877 18:33:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:35:20.877 [2024-12-06 18:33:51.577366] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:35:20.877 18:33:51 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:20.877 18:33:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:35:20.877 "name": "raid_bdev1", 00:35:20.877 "aliases": [ 00:35:20.877 "88f8478d-3a8d-4e31-b230-a092658e003c" 00:35:20.877 ], 00:35:20.877 "product_name": "Raid Volume", 00:35:20.877 "block_size": 512, 00:35:20.877 "num_blocks": 190464, 00:35:20.877 "uuid": "88f8478d-3a8d-4e31-b230-a092658e003c", 00:35:20.877 "assigned_rate_limits": { 00:35:20.877 "rw_ios_per_sec": 0, 00:35:20.877 "rw_mbytes_per_sec": 0, 00:35:20.877 "r_mbytes_per_sec": 0, 00:35:20.877 "w_mbytes_per_sec": 0 00:35:20.877 }, 00:35:20.877 "claimed": false, 00:35:20.877 "zoned": false, 00:35:20.877 "supported_io_types": { 00:35:20.877 "read": true, 00:35:20.877 "write": true, 00:35:20.877 "unmap": false, 00:35:20.877 "flush": false, 00:35:20.877 "reset": true, 00:35:20.877 "nvme_admin": false, 00:35:20.877 "nvme_io": false, 00:35:20.877 "nvme_io_md": false, 00:35:20.877 "write_zeroes": true, 00:35:20.877 "zcopy": false, 00:35:20.877 "get_zone_info": false, 00:35:20.877 "zone_management": false, 00:35:20.877 "zone_append": false, 00:35:20.877 "compare": false, 00:35:20.877 "compare_and_write": false, 00:35:20.877 "abort": false, 00:35:20.877 "seek_hole": false, 00:35:20.877 "seek_data": false, 00:35:20.877 "copy": false, 00:35:20.877 "nvme_iov_md": false 00:35:20.877 }, 00:35:20.877 "driver_specific": { 00:35:20.877 "raid": { 00:35:20.877 "uuid": "88f8478d-3a8d-4e31-b230-a092658e003c", 00:35:20.877 "strip_size_kb": 64, 00:35:20.877 "state": "online", 00:35:20.877 "raid_level": "raid5f", 00:35:20.877 "superblock": true, 00:35:20.877 "num_base_bdevs": 4, 00:35:20.877 "num_base_bdevs_discovered": 4, 00:35:20.877 "num_base_bdevs_operational": 4, 00:35:20.877 "base_bdevs_list": [ 00:35:20.877 { 00:35:20.877 "name": "pt1", 00:35:20.877 "uuid": "00000000-0000-0000-0000-000000000001", 00:35:20.877 "is_configured": true, 00:35:20.877 "data_offset": 2048, 
00:35:20.877 "data_size": 63488 00:35:20.877 }, 00:35:20.877 { 00:35:20.877 "name": "pt2", 00:35:20.877 "uuid": "00000000-0000-0000-0000-000000000002", 00:35:20.877 "is_configured": true, 00:35:20.877 "data_offset": 2048, 00:35:20.877 "data_size": 63488 00:35:20.877 }, 00:35:20.877 { 00:35:20.877 "name": "pt3", 00:35:20.877 "uuid": "00000000-0000-0000-0000-000000000003", 00:35:20.877 "is_configured": true, 00:35:20.877 "data_offset": 2048, 00:35:20.877 "data_size": 63488 00:35:20.877 }, 00:35:20.877 { 00:35:20.877 "name": "pt4", 00:35:20.877 "uuid": "00000000-0000-0000-0000-000000000004", 00:35:20.877 "is_configured": true, 00:35:20.877 "data_offset": 2048, 00:35:20.877 "data_size": 63488 00:35:20.877 } 00:35:20.877 ] 00:35:20.877 } 00:35:20.877 } 00:35:20.877 }' 00:35:20.877 18:33:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:35:20.877 18:33:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:35:20.877 pt2 00:35:20.877 pt3 00:35:20.877 pt4' 00:35:20.877 18:33:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:35:20.877 18:33:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:35:20.877 18:33:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:35:20.877 18:33:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:35:20.877 18:33:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:20.877 18:33:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:35:20.877 18:33:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:35:20.877 18:33:51 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:20.877 18:33:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:35:20.877 18:33:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:35:20.877 18:33:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:35:20.877 18:33:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:35:20.877 18:33:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:20.877 18:33:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:35:20.877 18:33:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:35:20.877 18:33:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:20.877 18:33:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:35:20.877 18:33:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:35:20.877 18:33:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:35:20.877 18:33:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:35:20.877 18:33:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:35:20.877 18:33:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:20.877 18:33:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:35:20.877 18:33:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:20.877 18:33:51 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:35:20.877 18:33:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:35:20.877 18:33:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:35:21.137 18:33:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:35:21.137 18:33:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:35:21.137 18:33:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:21.137 18:33:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:35:21.137 18:33:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:21.137 18:33:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:35:21.137 18:33:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:35:21.137 18:33:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:35:21.137 18:33:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:21.137 18:33:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:35:21.137 18:33:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:35:21.137 [2024-12-06 18:33:51.876803] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:35:21.137 18:33:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:21.137 18:33:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=88f8478d-3a8d-4e31-b230-a092658e003c 00:35:21.137 18:33:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 
88f8478d-3a8d-4e31-b230-a092658e003c ']' 00:35:21.137 18:33:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:35:21.137 18:33:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:21.137 18:33:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:35:21.137 [2024-12-06 18:33:51.924567] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:35:21.137 [2024-12-06 18:33:51.924595] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:35:21.137 [2024-12-06 18:33:51.924753] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:35:21.137 [2024-12-06 18:33:51.924849] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:35:21.137 [2024-12-06 18:33:51.924869] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:35:21.137 18:33:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:21.137 18:33:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:35:21.138 18:33:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:21.138 18:33:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:21.138 18:33:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:35:21.138 18:33:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:21.138 18:33:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:35:21.138 18:33:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:35:21.138 18:33:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:35:21.138 
18:33:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:35:21.138 18:33:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:21.138 18:33:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:35:21.138 18:33:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:21.138 18:33:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:35:21.138 18:33:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:35:21.138 18:33:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:21.138 18:33:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:35:21.138 18:33:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:21.138 18:33:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:35:21.138 18:33:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:35:21.138 18:33:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:21.138 18:33:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:35:21.138 18:33:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:21.138 18:33:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:35:21.138 18:33:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:35:21.138 18:33:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:21.138 18:33:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:35:21.138 18:33:52 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:21.138 18:33:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:35:21.138 18:33:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:21.138 18:33:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:35:21.138 18:33:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:35:21.138 18:33:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:21.138 18:33:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:35:21.138 18:33:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:35:21.138 18:33:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:35:21.138 18:33:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:35:21.138 18:33:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:35:21.138 18:33:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:21.138 18:33:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:35:21.138 18:33:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:21.138 18:33:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:35:21.138 18:33:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 
-- # xtrace_disable 00:35:21.138 18:33:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:35:21.138 [2024-12-06 18:33:52.076380] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:35:21.138 [2024-12-06 18:33:52.078769] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:35:21.138 [2024-12-06 18:33:52.078821] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:35:21.138 [2024-12-06 18:33:52.078857] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:35:21.138 [2024-12-06 18:33:52.078912] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:35:21.138 [2024-12-06 18:33:52.078967] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:35:21.138 [2024-12-06 18:33:52.078990] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:35:21.138 [2024-12-06 18:33:52.079013] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:35:21.138 [2024-12-06 18:33:52.079030] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:35:21.138 [2024-12-06 18:33:52.079043] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:35:21.138 request: 00:35:21.138 { 00:35:21.138 "name": "raid_bdev1", 00:35:21.138 "raid_level": "raid5f", 00:35:21.138 "base_bdevs": [ 00:35:21.138 "malloc1", 00:35:21.138 "malloc2", 00:35:21.138 "malloc3", 00:35:21.138 "malloc4" 00:35:21.138 ], 00:35:21.138 "strip_size_kb": 64, 00:35:21.398 "superblock": false, 00:35:21.398 "method": "bdev_raid_create", 00:35:21.398 "req_id": 1 00:35:21.398 } 00:35:21.398 Got JSON-RPC error response 
00:35:21.398 response: 00:35:21.398 { 00:35:21.398 "code": -17, 00:35:21.398 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:35:21.398 } 00:35:21.398 18:33:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:35:21.398 18:33:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:35:21.398 18:33:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:35:21.398 18:33:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:35:21.398 18:33:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:35:21.398 18:33:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:21.398 18:33:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:21.398 18:33:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:35:21.398 18:33:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:35:21.398 18:33:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:21.398 18:33:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:35:21.398 18:33:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:35:21.398 18:33:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:35:21.398 18:33:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:21.398 18:33:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:35:21.398 [2024-12-06 18:33:52.144288] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:35:21.398 [2024-12-06 18:33:52.144346] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: 
base bdev opened 00:35:21.398 [2024-12-06 18:33:52.144366] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:35:21.398 [2024-12-06 18:33:52.144380] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:35:21.398 [2024-12-06 18:33:52.147157] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:35:21.398 [2024-12-06 18:33:52.147201] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:35:21.398 [2024-12-06 18:33:52.147279] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:35:21.398 [2024-12-06 18:33:52.147343] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:35:21.398 pt1 00:35:21.398 18:33:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:21.398 18:33:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 4 00:35:21.398 18:33:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:35:21.398 18:33:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:35:21.398 18:33:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:35:21.398 18:33:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:35:21.398 18:33:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:35:21.398 18:33:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:35:21.398 18:33:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:35:21.398 18:33:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:35:21.398 18:33:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 
-- # local tmp 00:35:21.398 18:33:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:21.398 18:33:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:21.398 18:33:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:21.398 18:33:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:35:21.398 18:33:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:21.398 18:33:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:35:21.398 "name": "raid_bdev1", 00:35:21.398 "uuid": "88f8478d-3a8d-4e31-b230-a092658e003c", 00:35:21.398 "strip_size_kb": 64, 00:35:21.398 "state": "configuring", 00:35:21.398 "raid_level": "raid5f", 00:35:21.398 "superblock": true, 00:35:21.398 "num_base_bdevs": 4, 00:35:21.398 "num_base_bdevs_discovered": 1, 00:35:21.398 "num_base_bdevs_operational": 4, 00:35:21.398 "base_bdevs_list": [ 00:35:21.398 { 00:35:21.398 "name": "pt1", 00:35:21.398 "uuid": "00000000-0000-0000-0000-000000000001", 00:35:21.398 "is_configured": true, 00:35:21.398 "data_offset": 2048, 00:35:21.398 "data_size": 63488 00:35:21.398 }, 00:35:21.398 { 00:35:21.398 "name": null, 00:35:21.398 "uuid": "00000000-0000-0000-0000-000000000002", 00:35:21.398 "is_configured": false, 00:35:21.398 "data_offset": 2048, 00:35:21.398 "data_size": 63488 00:35:21.398 }, 00:35:21.398 { 00:35:21.398 "name": null, 00:35:21.398 "uuid": "00000000-0000-0000-0000-000000000003", 00:35:21.398 "is_configured": false, 00:35:21.398 "data_offset": 2048, 00:35:21.398 "data_size": 63488 00:35:21.398 }, 00:35:21.398 { 00:35:21.398 "name": null, 00:35:21.398 "uuid": "00000000-0000-0000-0000-000000000004", 00:35:21.398 "is_configured": false, 00:35:21.398 "data_offset": 2048, 00:35:21.398 "data_size": 63488 00:35:21.398 } 00:35:21.398 ] 00:35:21.398 }' 
00:35:21.398 18:33:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:35:21.398 18:33:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:35:21.658 18:33:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:35:21.658 18:33:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:35:21.658 18:33:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:21.658 18:33:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:35:21.918 [2024-12-06 18:33:52.607680] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:35:21.918 [2024-12-06 18:33:52.607872] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:35:21.918 [2024-12-06 18:33:52.607969] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:35:21.918 [2024-12-06 18:33:52.608058] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:35:21.918 [2024-12-06 18:33:52.608602] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:35:21.918 [2024-12-06 18:33:52.608744] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:35:21.918 [2024-12-06 18:33:52.608934] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:35:21.918 [2024-12-06 18:33:52.609057] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:35:21.918 pt2 00:35:21.918 18:33:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:21.918 18:33:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:35:21.918 18:33:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:35:21.918 18:33:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:35:21.918 [2024-12-06 18:33:52.619666] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:35:21.918 18:33:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:21.918 18:33:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 4 00:35:21.918 18:33:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:35:21.918 18:33:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:35:21.918 18:33:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:35:21.918 18:33:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:35:21.918 18:33:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:35:21.918 18:33:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:35:21.918 18:33:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:35:21.918 18:33:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:35:21.918 18:33:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:35:21.918 18:33:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:21.918 18:33:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:21.918 18:33:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:35:21.918 18:33:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:21.918 18:33:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:35:21.918 18:33:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:35:21.918 "name": "raid_bdev1", 00:35:21.918 "uuid": "88f8478d-3a8d-4e31-b230-a092658e003c", 00:35:21.918 "strip_size_kb": 64, 00:35:21.918 "state": "configuring", 00:35:21.918 "raid_level": "raid5f", 00:35:21.918 "superblock": true, 00:35:21.918 "num_base_bdevs": 4, 00:35:21.918 "num_base_bdevs_discovered": 1, 00:35:21.918 "num_base_bdevs_operational": 4, 00:35:21.918 "base_bdevs_list": [ 00:35:21.918 { 00:35:21.918 "name": "pt1", 00:35:21.918 "uuid": "00000000-0000-0000-0000-000000000001", 00:35:21.918 "is_configured": true, 00:35:21.918 "data_offset": 2048, 00:35:21.918 "data_size": 63488 00:35:21.918 }, 00:35:21.918 { 00:35:21.918 "name": null, 00:35:21.918 "uuid": "00000000-0000-0000-0000-000000000002", 00:35:21.918 "is_configured": false, 00:35:21.918 "data_offset": 0, 00:35:21.918 "data_size": 63488 00:35:21.918 }, 00:35:21.918 { 00:35:21.918 "name": null, 00:35:21.918 "uuid": "00000000-0000-0000-0000-000000000003", 00:35:21.918 "is_configured": false, 00:35:21.918 "data_offset": 2048, 00:35:21.918 "data_size": 63488 00:35:21.918 }, 00:35:21.918 { 00:35:21.918 "name": null, 00:35:21.918 "uuid": "00000000-0000-0000-0000-000000000004", 00:35:21.918 "is_configured": false, 00:35:21.918 "data_offset": 2048, 00:35:21.918 "data_size": 63488 00:35:21.918 } 00:35:21.918 ] 00:35:21.918 }' 00:35:21.918 18:33:52 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:35:21.918 18:33:52 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:35:22.179 18:33:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:35:22.179 18:33:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:35:22.179 18:33:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 
00:35:22.179 18:33:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:22.179 18:33:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:35:22.179 [2024-12-06 18:33:53.039045] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:35:22.179 [2024-12-06 18:33:53.039250] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:35:22.179 [2024-12-06 18:33:53.039278] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:35:22.179 [2024-12-06 18:33:53.039290] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:35:22.179 [2024-12-06 18:33:53.039748] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:35:22.179 [2024-12-06 18:33:53.039766] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:35:22.179 [2024-12-06 18:33:53.039838] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:35:22.179 [2024-12-06 18:33:53.039858] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:35:22.179 pt2 00:35:22.179 18:33:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:22.179 18:33:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:35:22.179 18:33:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:35:22.179 18:33:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:35:22.179 18:33:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:22.179 18:33:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:35:22.179 [2024-12-06 18:33:53.051027] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 
00:35:22.179 [2024-12-06 18:33:53.051194] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:35:22.179 [2024-12-06 18:33:53.051230] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:35:22.179 [2024-12-06 18:33:53.051243] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:35:22.179 [2024-12-06 18:33:53.051636] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:35:22.179 [2024-12-06 18:33:53.051658] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:35:22.179 [2024-12-06 18:33:53.051721] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:35:22.179 [2024-12-06 18:33:53.051747] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:35:22.179 pt3 00:35:22.179 18:33:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:22.179 18:33:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:35:22.179 18:33:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:35:22.179 18:33:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:35:22.179 18:33:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:22.179 18:33:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:35:22.179 [2024-12-06 18:33:53.062985] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:35:22.179 [2024-12-06 18:33:53.063028] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:35:22.179 [2024-12-06 18:33:53.063047] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:35:22.179 [2024-12-06 18:33:53.063057] vbdev_passthru.c: 
697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:35:22.179 [2024-12-06 18:33:53.063510] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:35:22.179 [2024-12-06 18:33:53.063529] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:35:22.179 [2024-12-06 18:33:53.063589] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:35:22.179 [2024-12-06 18:33:53.063611] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:35:22.179 [2024-12-06 18:33:53.063752] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:35:22.179 [2024-12-06 18:33:53.063763] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:35:22.179 [2024-12-06 18:33:53.064027] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:35:22.179 [2024-12-06 18:33:53.071570] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:35:22.180 [2024-12-06 18:33:53.071596] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:35:22.180 [2024-12-06 18:33:53.071771] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:35:22.180 pt4 00:35:22.180 18:33:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:22.180 18:33:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:35:22.180 18:33:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:35:22.180 18:33:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:35:22.180 18:33:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:35:22.180 18:33:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:35:22.180 18:33:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:35:22.180 18:33:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:35:22.180 18:33:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:35:22.180 18:33:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:35:22.180 18:33:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:35:22.180 18:33:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:35:22.180 18:33:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:35:22.180 18:33:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:22.180 18:33:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:22.180 18:33:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:35:22.180 18:33:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:22.180 18:33:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:22.180 18:33:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:35:22.180 "name": "raid_bdev1", 00:35:22.180 "uuid": "88f8478d-3a8d-4e31-b230-a092658e003c", 00:35:22.180 "strip_size_kb": 64, 00:35:22.180 "state": "online", 00:35:22.180 "raid_level": "raid5f", 00:35:22.180 "superblock": true, 00:35:22.180 "num_base_bdevs": 4, 00:35:22.180 "num_base_bdevs_discovered": 4, 00:35:22.180 "num_base_bdevs_operational": 4, 00:35:22.180 "base_bdevs_list": [ 00:35:22.180 { 00:35:22.180 "name": "pt1", 00:35:22.180 "uuid": "00000000-0000-0000-0000-000000000001", 00:35:22.180 "is_configured": true, 00:35:22.180 
"data_offset": 2048, 00:35:22.180 "data_size": 63488 00:35:22.180 }, 00:35:22.180 { 00:35:22.180 "name": "pt2", 00:35:22.180 "uuid": "00000000-0000-0000-0000-000000000002", 00:35:22.180 "is_configured": true, 00:35:22.180 "data_offset": 2048, 00:35:22.180 "data_size": 63488 00:35:22.180 }, 00:35:22.180 { 00:35:22.180 "name": "pt3", 00:35:22.180 "uuid": "00000000-0000-0000-0000-000000000003", 00:35:22.180 "is_configured": true, 00:35:22.180 "data_offset": 2048, 00:35:22.180 "data_size": 63488 00:35:22.180 }, 00:35:22.180 { 00:35:22.180 "name": "pt4", 00:35:22.180 "uuid": "00000000-0000-0000-0000-000000000004", 00:35:22.180 "is_configured": true, 00:35:22.180 "data_offset": 2048, 00:35:22.180 "data_size": 63488 00:35:22.180 } 00:35:22.180 ] 00:35:22.180 }' 00:35:22.180 18:33:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:35:22.180 18:33:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:35:22.749 18:33:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:35:22.749 18:33:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:35:22.749 18:33:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:35:22.749 18:33:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:35:22.749 18:33:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:35:22.749 18:33:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:35:22.749 18:33:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:35:22.749 18:33:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:35:22.749 18:33:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:22.749 18:33:53 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:35:22.749 [2024-12-06 18:33:53.496799] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:35:22.749 18:33:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:22.749 18:33:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:35:22.749 "name": "raid_bdev1", 00:35:22.749 "aliases": [ 00:35:22.749 "88f8478d-3a8d-4e31-b230-a092658e003c" 00:35:22.749 ], 00:35:22.749 "product_name": "Raid Volume", 00:35:22.749 "block_size": 512, 00:35:22.749 "num_blocks": 190464, 00:35:22.749 "uuid": "88f8478d-3a8d-4e31-b230-a092658e003c", 00:35:22.749 "assigned_rate_limits": { 00:35:22.749 "rw_ios_per_sec": 0, 00:35:22.749 "rw_mbytes_per_sec": 0, 00:35:22.749 "r_mbytes_per_sec": 0, 00:35:22.749 "w_mbytes_per_sec": 0 00:35:22.749 }, 00:35:22.749 "claimed": false, 00:35:22.749 "zoned": false, 00:35:22.749 "supported_io_types": { 00:35:22.749 "read": true, 00:35:22.749 "write": true, 00:35:22.749 "unmap": false, 00:35:22.749 "flush": false, 00:35:22.749 "reset": true, 00:35:22.749 "nvme_admin": false, 00:35:22.749 "nvme_io": false, 00:35:22.749 "nvme_io_md": false, 00:35:22.749 "write_zeroes": true, 00:35:22.749 "zcopy": false, 00:35:22.749 "get_zone_info": false, 00:35:22.750 "zone_management": false, 00:35:22.750 "zone_append": false, 00:35:22.750 "compare": false, 00:35:22.750 "compare_and_write": false, 00:35:22.750 "abort": false, 00:35:22.750 "seek_hole": false, 00:35:22.750 "seek_data": false, 00:35:22.750 "copy": false, 00:35:22.750 "nvme_iov_md": false 00:35:22.750 }, 00:35:22.750 "driver_specific": { 00:35:22.750 "raid": { 00:35:22.750 "uuid": "88f8478d-3a8d-4e31-b230-a092658e003c", 00:35:22.750 "strip_size_kb": 64, 00:35:22.750 "state": "online", 00:35:22.750 "raid_level": "raid5f", 00:35:22.750 "superblock": true, 00:35:22.750 "num_base_bdevs": 4, 00:35:22.750 "num_base_bdevs_discovered": 4, 
00:35:22.750 "num_base_bdevs_operational": 4, 00:35:22.750 "base_bdevs_list": [ 00:35:22.750 { 00:35:22.750 "name": "pt1", 00:35:22.750 "uuid": "00000000-0000-0000-0000-000000000001", 00:35:22.750 "is_configured": true, 00:35:22.750 "data_offset": 2048, 00:35:22.750 "data_size": 63488 00:35:22.750 }, 00:35:22.750 { 00:35:22.750 "name": "pt2", 00:35:22.750 "uuid": "00000000-0000-0000-0000-000000000002", 00:35:22.750 "is_configured": true, 00:35:22.750 "data_offset": 2048, 00:35:22.750 "data_size": 63488 00:35:22.750 }, 00:35:22.750 { 00:35:22.750 "name": "pt3", 00:35:22.750 "uuid": "00000000-0000-0000-0000-000000000003", 00:35:22.750 "is_configured": true, 00:35:22.750 "data_offset": 2048, 00:35:22.750 "data_size": 63488 00:35:22.750 }, 00:35:22.750 { 00:35:22.750 "name": "pt4", 00:35:22.750 "uuid": "00000000-0000-0000-0000-000000000004", 00:35:22.750 "is_configured": true, 00:35:22.750 "data_offset": 2048, 00:35:22.750 "data_size": 63488 00:35:22.750 } 00:35:22.750 ] 00:35:22.750 } 00:35:22.750 } 00:35:22.750 }' 00:35:22.750 18:33:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:35:22.750 18:33:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:35:22.750 pt2 00:35:22.750 pt3 00:35:22.750 pt4' 00:35:22.750 18:33:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:35:22.750 18:33:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:35:22.750 18:33:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:35:22.750 18:33:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:35:22.750 18:33:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | 
join(" ")' 00:35:22.750 18:33:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:22.750 18:33:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:35:22.750 18:33:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:22.750 18:33:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:35:22.750 18:33:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:35:22.750 18:33:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:35:22.750 18:33:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:35:22.750 18:33:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:22.750 18:33:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:35:22.750 18:33:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:35:22.750 18:33:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:23.010 18:33:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:35:23.010 18:33:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:35:23.010 18:33:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:35:23.010 18:33:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:35:23.010 18:33:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:35:23.010 18:33:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:23.010 18:33:53 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:35:23.010 18:33:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:23.010 18:33:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:35:23.010 18:33:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:35:23.010 18:33:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:35:23.010 18:33:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:35:23.010 18:33:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:23.010 18:33:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:35:23.010 18:33:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:35:23.010 18:33:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:23.010 18:33:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:35:23.010 18:33:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:35:23.010 18:33:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:35:23.010 18:33:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:35:23.010 18:33:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:23.010 18:33:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:35:23.010 [2024-12-06 18:33:53.804331] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:35:23.010 18:33:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:23.010 
18:33:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 88f8478d-3a8d-4e31-b230-a092658e003c '!=' 88f8478d-3a8d-4e31-b230-a092658e003c ']' 00:35:23.010 18:33:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid5f 00:35:23.010 18:33:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:35:23.010 18:33:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:35:23.010 18:33:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:35:23.010 18:33:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:23.010 18:33:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:35:23.010 [2024-12-06 18:33:53.848205] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:35:23.010 18:33:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:23.010 18:33:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:35:23.010 18:33:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:35:23.010 18:33:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:35:23.010 18:33:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:35:23.010 18:33:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:35:23.010 18:33:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:35:23.010 18:33:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:35:23.010 18:33:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:35:23.010 18:33:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:35:23.010 18:33:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:35:23.010 18:33:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:23.010 18:33:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:23.010 18:33:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:23.010 18:33:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:35:23.010 18:33:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:23.010 18:33:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:35:23.010 "name": "raid_bdev1", 00:35:23.010 "uuid": "88f8478d-3a8d-4e31-b230-a092658e003c", 00:35:23.010 "strip_size_kb": 64, 00:35:23.010 "state": "online", 00:35:23.010 "raid_level": "raid5f", 00:35:23.010 "superblock": true, 00:35:23.010 "num_base_bdevs": 4, 00:35:23.010 "num_base_bdevs_discovered": 3, 00:35:23.010 "num_base_bdevs_operational": 3, 00:35:23.010 "base_bdevs_list": [ 00:35:23.010 { 00:35:23.010 "name": null, 00:35:23.010 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:23.010 "is_configured": false, 00:35:23.010 "data_offset": 0, 00:35:23.010 "data_size": 63488 00:35:23.010 }, 00:35:23.010 { 00:35:23.010 "name": "pt2", 00:35:23.010 "uuid": "00000000-0000-0000-0000-000000000002", 00:35:23.010 "is_configured": true, 00:35:23.010 "data_offset": 2048, 00:35:23.010 "data_size": 63488 00:35:23.010 }, 00:35:23.010 { 00:35:23.010 "name": "pt3", 00:35:23.010 "uuid": "00000000-0000-0000-0000-000000000003", 00:35:23.010 "is_configured": true, 00:35:23.010 "data_offset": 2048, 00:35:23.010 "data_size": 63488 00:35:23.010 }, 00:35:23.010 { 00:35:23.011 "name": "pt4", 00:35:23.011 "uuid": "00000000-0000-0000-0000-000000000004", 00:35:23.011 "is_configured": true, 00:35:23.011 
"data_offset": 2048, 00:35:23.011 "data_size": 63488 00:35:23.011 } 00:35:23.011 ] 00:35:23.011 }' 00:35:23.011 18:33:53 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:35:23.011 18:33:53 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:35:23.580 18:33:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:35:23.580 18:33:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:23.580 18:33:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:35:23.580 [2024-12-06 18:33:54.259518] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:35:23.580 [2024-12-06 18:33:54.259548] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:35:23.580 [2024-12-06 18:33:54.259620] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:35:23.580 [2024-12-06 18:33:54.259701] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:35:23.580 [2024-12-06 18:33:54.259713] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:35:23.580 18:33:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:23.580 18:33:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:23.580 18:33:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:23.580 18:33:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:35:23.580 18:33:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:35:23.580 18:33:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:23.580 18:33:54 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@500 -- # raid_bdev= 00:35:23.581 18:33:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:35:23.581 18:33:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:35:23.581 18:33:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:35:23.581 18:33:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:35:23.581 18:33:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:23.581 18:33:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:35:23.581 18:33:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:23.581 18:33:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:35:23.581 18:33:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:35:23.581 18:33:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:35:23.581 18:33:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:23.581 18:33:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:35:23.581 18:33:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:23.581 18:33:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:35:23.581 18:33:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:35:23.581 18:33:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt4 00:35:23.581 18:33:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:23.581 18:33:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:35:23.581 18:33:54 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:23.581 18:33:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:35:23.581 18:33:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:35:23.581 18:33:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:35:23.581 18:33:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:35:23.581 18:33:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:35:23.581 18:33:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:23.581 18:33:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:35:23.581 [2024-12-06 18:33:54.355383] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:35:23.581 [2024-12-06 18:33:54.355543] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:35:23.581 [2024-12-06 18:33:54.355575] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:35:23.581 [2024-12-06 18:33:54.355587] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:35:23.581 [2024-12-06 18:33:54.358393] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:35:23.581 [2024-12-06 18:33:54.358430] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:35:23.581 [2024-12-06 18:33:54.358518] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:35:23.581 [2024-12-06 18:33:54.358564] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:35:23.581 pt2 00:35:23.581 18:33:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:23.581 18:33:54 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:35:23.581 18:33:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:35:23.581 18:33:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:35:23.581 18:33:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:35:23.581 18:33:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:35:23.581 18:33:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:35:23.581 18:33:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:35:23.581 18:33:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:35:23.581 18:33:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:35:23.581 18:33:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:35:23.581 18:33:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:23.581 18:33:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:23.581 18:33:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:23.581 18:33:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:35:23.581 18:33:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:23.581 18:33:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:35:23.581 "name": "raid_bdev1", 00:35:23.581 "uuid": "88f8478d-3a8d-4e31-b230-a092658e003c", 00:35:23.581 "strip_size_kb": 64, 00:35:23.581 "state": "configuring", 00:35:23.581 "raid_level": "raid5f", 00:35:23.581 "superblock": true, 00:35:23.581 
"num_base_bdevs": 4, 00:35:23.581 "num_base_bdevs_discovered": 1, 00:35:23.581 "num_base_bdevs_operational": 3, 00:35:23.581 "base_bdevs_list": [ 00:35:23.581 { 00:35:23.581 "name": null, 00:35:23.581 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:23.581 "is_configured": false, 00:35:23.581 "data_offset": 2048, 00:35:23.581 "data_size": 63488 00:35:23.581 }, 00:35:23.581 { 00:35:23.581 "name": "pt2", 00:35:23.581 "uuid": "00000000-0000-0000-0000-000000000002", 00:35:23.581 "is_configured": true, 00:35:23.581 "data_offset": 2048, 00:35:23.581 "data_size": 63488 00:35:23.581 }, 00:35:23.581 { 00:35:23.581 "name": null, 00:35:23.581 "uuid": "00000000-0000-0000-0000-000000000003", 00:35:23.581 "is_configured": false, 00:35:23.581 "data_offset": 2048, 00:35:23.581 "data_size": 63488 00:35:23.581 }, 00:35:23.581 { 00:35:23.581 "name": null, 00:35:23.581 "uuid": "00000000-0000-0000-0000-000000000004", 00:35:23.581 "is_configured": false, 00:35:23.581 "data_offset": 2048, 00:35:23.581 "data_size": 63488 00:35:23.581 } 00:35:23.581 ] 00:35:23.581 }' 00:35:23.581 18:33:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:35:23.581 18:33:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:35:23.841 18:33:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:35:23.841 18:33:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:35:23.841 18:33:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:35:23.841 18:33:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:23.841 18:33:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:35:23.841 [2024-12-06 18:33:54.750870] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:35:23.841 [2024-12-06 
18:33:54.751076] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:35:23.841 [2024-12-06 18:33:54.751196] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:35:23.841 [2024-12-06 18:33:54.751290] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:35:23.841 [2024-12-06 18:33:54.751813] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:35:23.841 [2024-12-06 18:33:54.751947] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:35:23.841 [2024-12-06 18:33:54.752126] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:35:23.841 [2024-12-06 18:33:54.752243] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:35:23.841 pt3 00:35:23.841 18:33:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:23.841 18:33:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:35:23.841 18:33:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:35:23.841 18:33:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:35:23.841 18:33:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:35:23.841 18:33:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:35:23.841 18:33:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:35:23.841 18:33:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:35:23.841 18:33:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:35:23.841 18:33:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:35:23.841 18:33:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:35:23.841 18:33:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:23.841 18:33:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:23.841 18:33:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:23.841 18:33:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:35:24.100 18:33:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:24.100 18:33:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:35:24.100 "name": "raid_bdev1", 00:35:24.100 "uuid": "88f8478d-3a8d-4e31-b230-a092658e003c", 00:35:24.100 "strip_size_kb": 64, 00:35:24.100 "state": "configuring", 00:35:24.100 "raid_level": "raid5f", 00:35:24.100 "superblock": true, 00:35:24.100 "num_base_bdevs": 4, 00:35:24.100 "num_base_bdevs_discovered": 2, 00:35:24.100 "num_base_bdevs_operational": 3, 00:35:24.100 "base_bdevs_list": [ 00:35:24.100 { 00:35:24.100 "name": null, 00:35:24.100 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:24.100 "is_configured": false, 00:35:24.100 "data_offset": 2048, 00:35:24.100 "data_size": 63488 00:35:24.100 }, 00:35:24.100 { 00:35:24.100 "name": "pt2", 00:35:24.100 "uuid": "00000000-0000-0000-0000-000000000002", 00:35:24.100 "is_configured": true, 00:35:24.100 "data_offset": 2048, 00:35:24.100 "data_size": 63488 00:35:24.100 }, 00:35:24.100 { 00:35:24.100 "name": "pt3", 00:35:24.100 "uuid": "00000000-0000-0000-0000-000000000003", 00:35:24.100 "is_configured": true, 00:35:24.100 "data_offset": 2048, 00:35:24.100 "data_size": 63488 00:35:24.100 }, 00:35:24.100 { 00:35:24.100 "name": null, 00:35:24.100 "uuid": "00000000-0000-0000-0000-000000000004", 00:35:24.100 "is_configured": false, 00:35:24.100 "data_offset": 2048, 
00:35:24.100 "data_size": 63488 00:35:24.100 } 00:35:24.100 ] 00:35:24.100 }' 00:35:24.100 18:33:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:35:24.100 18:33:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:35:24.360 18:33:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:35:24.360 18:33:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:35:24.360 18:33:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@519 -- # i=3 00:35:24.360 18:33:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:35:24.360 18:33:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:24.360 18:33:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:35:24.360 [2024-12-06 18:33:55.154373] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:35:24.360 [2024-12-06 18:33:55.154568] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:35:24.360 [2024-12-06 18:33:55.154613] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:35:24.360 [2024-12-06 18:33:55.154625] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:35:24.360 [2024-12-06 18:33:55.155116] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:35:24.360 [2024-12-06 18:33:55.155135] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:35:24.360 [2024-12-06 18:33:55.155233] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:35:24.360 [2024-12-06 18:33:55.155265] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:35:24.360 [2024-12-06 18:33:55.155417] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:35:24.360 [2024-12-06 18:33:55.155432] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:35:24.360 [2024-12-06 18:33:55.155717] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:35:24.360 [2024-12-06 18:33:55.163087] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:35:24.360 [2024-12-06 18:33:55.163116] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:35:24.360 [2024-12-06 18:33:55.163446] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:35:24.360 pt4 00:35:24.360 18:33:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:24.360 18:33:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:35:24.360 18:33:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:35:24.360 18:33:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:35:24.360 18:33:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:35:24.360 18:33:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:35:24.360 18:33:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:35:24.360 18:33:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:35:24.360 18:33:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:35:24.360 18:33:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:35:24.360 18:33:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:35:24.360 
18:33:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:24.360 18:33:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:24.360 18:33:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:24.360 18:33:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:35:24.360 18:33:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:24.360 18:33:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:35:24.360 "name": "raid_bdev1", 00:35:24.360 "uuid": "88f8478d-3a8d-4e31-b230-a092658e003c", 00:35:24.360 "strip_size_kb": 64, 00:35:24.360 "state": "online", 00:35:24.360 "raid_level": "raid5f", 00:35:24.360 "superblock": true, 00:35:24.360 "num_base_bdevs": 4, 00:35:24.360 "num_base_bdevs_discovered": 3, 00:35:24.360 "num_base_bdevs_operational": 3, 00:35:24.360 "base_bdevs_list": [ 00:35:24.360 { 00:35:24.360 "name": null, 00:35:24.360 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:24.360 "is_configured": false, 00:35:24.360 "data_offset": 2048, 00:35:24.360 "data_size": 63488 00:35:24.360 }, 00:35:24.360 { 00:35:24.360 "name": "pt2", 00:35:24.360 "uuid": "00000000-0000-0000-0000-000000000002", 00:35:24.360 "is_configured": true, 00:35:24.360 "data_offset": 2048, 00:35:24.360 "data_size": 63488 00:35:24.360 }, 00:35:24.361 { 00:35:24.361 "name": "pt3", 00:35:24.361 "uuid": "00000000-0000-0000-0000-000000000003", 00:35:24.361 "is_configured": true, 00:35:24.361 "data_offset": 2048, 00:35:24.361 "data_size": 63488 00:35:24.361 }, 00:35:24.361 { 00:35:24.361 "name": "pt4", 00:35:24.361 "uuid": "00000000-0000-0000-0000-000000000004", 00:35:24.361 "is_configured": true, 00:35:24.361 "data_offset": 2048, 00:35:24.361 "data_size": 63488 00:35:24.361 } 00:35:24.361 ] 00:35:24.361 }' 00:35:24.361 18:33:55 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:35:24.361 18:33:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:35:24.618 18:33:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:35:24.618 18:33:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:24.618 18:33:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:35:24.618 [2024-12-06 18:33:55.564572] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:35:24.618 [2024-12-06 18:33:55.564720] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:35:24.618 [2024-12-06 18:33:55.564811] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:35:24.618 [2024-12-06 18:33:55.564888] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:35:24.618 [2024-12-06 18:33:55.564904] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:35:24.877 18:33:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:24.877 18:33:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:24.877 18:33:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:24.877 18:33:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:35:24.877 18:33:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:35:24.877 18:33:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:24.877 18:33:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:35:24.877 18:33:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n 
'' ']' 00:35:24.877 18:33:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 4 -gt 2 ']' 00:35:24.877 18:33:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@534 -- # i=3 00:35:24.877 18:33:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt4 00:35:24.877 18:33:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:24.877 18:33:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:35:24.877 18:33:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:24.877 18:33:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:35:24.877 18:33:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:24.877 18:33:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:35:24.877 [2024-12-06 18:33:55.636471] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:35:24.877 [2024-12-06 18:33:55.636534] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:35:24.877 [2024-12-06 18:33:55.636562] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c080 00:35:24.877 [2024-12-06 18:33:55.636579] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:35:24.877 [2024-12-06 18:33:55.639467] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:35:24.877 [2024-12-06 18:33:55.639615] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:35:24.877 [2024-12-06 18:33:55.639711] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:35:24.877 [2024-12-06 18:33:55.639767] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:35:24.877 
[2024-12-06 18:33:55.639909] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:35:24.877 [2024-12-06 18:33:55.639925] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:35:24.877 [2024-12-06 18:33:55.639941] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:35:24.877 [2024-12-06 18:33:55.640009] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:35:24.877 [2024-12-06 18:33:55.640109] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:35:24.877 pt1 00:35:24.877 18:33:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:24.877 18:33:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 4 -gt 2 ']' 00:35:24.877 18:33:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:35:24.877 18:33:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:35:24.877 18:33:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:35:24.877 18:33:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:35:24.877 18:33:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:35:24.877 18:33:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:35:24.877 18:33:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:35:24.877 18:33:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:35:24.877 18:33:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:35:24.877 18:33:55 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:35:24.877 18:33:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:24.877 18:33:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:24.877 18:33:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:24.877 18:33:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:35:24.877 18:33:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:24.877 18:33:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:35:24.877 "name": "raid_bdev1", 00:35:24.877 "uuid": "88f8478d-3a8d-4e31-b230-a092658e003c", 00:35:24.877 "strip_size_kb": 64, 00:35:24.877 "state": "configuring", 00:35:24.877 "raid_level": "raid5f", 00:35:24.877 "superblock": true, 00:35:24.877 "num_base_bdevs": 4, 00:35:24.877 "num_base_bdevs_discovered": 2, 00:35:24.877 "num_base_bdevs_operational": 3, 00:35:24.877 "base_bdevs_list": [ 00:35:24.877 { 00:35:24.877 "name": null, 00:35:24.877 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:24.877 "is_configured": false, 00:35:24.877 "data_offset": 2048, 00:35:24.877 "data_size": 63488 00:35:24.877 }, 00:35:24.877 { 00:35:24.877 "name": "pt2", 00:35:24.877 "uuid": "00000000-0000-0000-0000-000000000002", 00:35:24.877 "is_configured": true, 00:35:24.877 "data_offset": 2048, 00:35:24.877 "data_size": 63488 00:35:24.878 }, 00:35:24.878 { 00:35:24.878 "name": "pt3", 00:35:24.878 "uuid": "00000000-0000-0000-0000-000000000003", 00:35:24.878 "is_configured": true, 00:35:24.878 "data_offset": 2048, 00:35:24.878 "data_size": 63488 00:35:24.878 }, 00:35:24.878 { 00:35:24.878 "name": null, 00:35:24.878 "uuid": "00000000-0000-0000-0000-000000000004", 00:35:24.878 "is_configured": false, 00:35:24.878 "data_offset": 2048, 00:35:24.878 "data_size": 63488 00:35:24.878 } 00:35:24.878 ] 
00:35:24.878 }' 00:35:24.878 18:33:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:35:24.878 18:33:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:35:25.135 18:33:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:35:25.135 18:33:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:25.135 18:33:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:35:25.136 18:33:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:35:25.395 18:33:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:25.395 18:33:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:35:25.395 18:33:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:35:25.395 18:33:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:25.395 18:33:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:35:25.395 [2024-12-06 18:33:56.119837] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:35:25.395 [2024-12-06 18:33:56.119903] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:35:25.395 [2024-12-06 18:33:56.119941] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:35:25.395 [2024-12-06 18:33:56.119954] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:35:25.395 [2024-12-06 18:33:56.120496] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:35:25.395 [2024-12-06 18:33:56.120518] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 
00:35:25.395 [2024-12-06 18:33:56.120610] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:35:25.395 [2024-12-06 18:33:56.120634] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:35:25.395 [2024-12-06 18:33:56.120774] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:35:25.395 [2024-12-06 18:33:56.120785] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:35:25.395 [2024-12-06 18:33:56.121079] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:35:25.395 [2024-12-06 18:33:56.128590] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:35:25.395 [2024-12-06 18:33:56.128619] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:35:25.395 [2024-12-06 18:33:56.128902] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:35:25.395 pt4 00:35:25.395 18:33:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:25.395 18:33:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:35:25.395 18:33:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:35:25.395 18:33:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:35:25.395 18:33:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:35:25.395 18:33:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:35:25.395 18:33:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:35:25.395 18:33:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:35:25.395 18:33:56 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:35:25.395 18:33:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:35:25.395 18:33:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:35:25.395 18:33:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:25.395 18:33:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:25.395 18:33:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:25.395 18:33:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:35:25.395 18:33:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:25.395 18:33:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:35:25.395 "name": "raid_bdev1", 00:35:25.395 "uuid": "88f8478d-3a8d-4e31-b230-a092658e003c", 00:35:25.395 "strip_size_kb": 64, 00:35:25.395 "state": "online", 00:35:25.395 "raid_level": "raid5f", 00:35:25.395 "superblock": true, 00:35:25.395 "num_base_bdevs": 4, 00:35:25.395 "num_base_bdevs_discovered": 3, 00:35:25.395 "num_base_bdevs_operational": 3, 00:35:25.395 "base_bdevs_list": [ 00:35:25.395 { 00:35:25.395 "name": null, 00:35:25.395 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:25.395 "is_configured": false, 00:35:25.395 "data_offset": 2048, 00:35:25.395 "data_size": 63488 00:35:25.395 }, 00:35:25.395 { 00:35:25.395 "name": "pt2", 00:35:25.395 "uuid": "00000000-0000-0000-0000-000000000002", 00:35:25.395 "is_configured": true, 00:35:25.395 "data_offset": 2048, 00:35:25.395 "data_size": 63488 00:35:25.395 }, 00:35:25.395 { 00:35:25.395 "name": "pt3", 00:35:25.395 "uuid": "00000000-0000-0000-0000-000000000003", 00:35:25.395 "is_configured": true, 00:35:25.395 "data_offset": 2048, 00:35:25.395 "data_size": 63488 
00:35:25.395 }, 00:35:25.395 { 00:35:25.395 "name": "pt4", 00:35:25.395 "uuid": "00000000-0000-0000-0000-000000000004", 00:35:25.395 "is_configured": true, 00:35:25.395 "data_offset": 2048, 00:35:25.395 "data_size": 63488 00:35:25.395 } 00:35:25.395 ] 00:35:25.395 }' 00:35:25.395 18:33:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:35:25.395 18:33:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:35:25.654 18:33:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:35:25.654 18:33:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:25.654 18:33:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:35:25.654 18:33:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:35:25.654 18:33:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:25.654 18:33:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:35:25.654 18:33:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:35:25.654 18:33:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:25.654 18:33:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:35:25.654 18:33:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:35:25.654 [2024-12-06 18:33:56.594414] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:35:25.913 18:33:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:25.913 18:33:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 88f8478d-3a8d-4e31-b230-a092658e003c '!=' 88f8478d-3a8d-4e31-b230-a092658e003c ']' 00:35:25.913 18:33:56 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 83821 00:35:25.913 18:33:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 83821 ']' 00:35:25.913 18:33:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@958 -- # kill -0 83821 00:35:25.913 18:33:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@959 -- # uname 00:35:25.913 18:33:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:25.913 18:33:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83821 00:35:25.913 killing process with pid 83821 00:35:25.913 18:33:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:35:25.913 18:33:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:35:25.913 18:33:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83821' 00:35:25.913 18:33:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@973 -- # kill 83821 00:35:25.913 [2024-12-06 18:33:56.673369] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:35:25.913 [2024-12-06 18:33:56.673455] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:35:25.913 18:33:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@978 -- # wait 83821 00:35:25.913 [2024-12-06 18:33:56.673534] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:35:25.913 [2024-12-06 18:33:56.673553] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:35:26.172 [2024-12-06 18:33:57.097465] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:35:27.563 18:33:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:35:27.563 
00:35:27.563 real 0m8.419s 00:35:27.563 user 0m12.928s 00:35:27.563 sys 0m1.824s 00:35:27.563 ************************************ 00:35:27.563 END TEST raid5f_superblock_test 00:35:27.563 ************************************ 00:35:27.563 18:33:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:27.563 18:33:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:35:27.563 18:33:58 bdev_raid -- bdev/bdev_raid.sh@989 -- # '[' true = true ']' 00:35:27.563 18:33:58 bdev_raid -- bdev/bdev_raid.sh@990 -- # run_test raid5f_rebuild_test raid_rebuild_test raid5f 4 false false true 00:35:27.563 18:33:58 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:35:27.563 18:33:58 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:27.563 18:33:58 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:35:27.563 ************************************ 00:35:27.563 START TEST raid5f_rebuild_test 00:35:27.563 ************************************ 00:35:27.563 18:33:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid5f 4 false false true 00:35:27.563 18:33:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:35:27.563 18:33:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:35:27.563 18:33:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:35:27.563 18:33:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:35:27.563 18:33:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:35:27.563 18:33:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:35:27.563 18:33:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:35:27.563 18:33:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo 
BaseBdev1 00:35:27.563 18:33:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:35:27.563 18:33:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:35:27.563 18:33:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:35:27.563 18:33:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:35:27.563 18:33:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:35:27.563 18:33:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:35:27.563 18:33:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:35:27.563 18:33:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:35:27.563 18:33:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:35:27.563 18:33:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:35:27.563 18:33:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:35:27.563 18:33:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:35:27.563 18:33:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:35:27.563 18:33:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:35:27.563 18:33:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:35:27.563 18:33:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:35:27.563 18:33:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:35:27.563 18:33:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:35:27.563 18:33:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:35:27.563 18:33:58 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:35:27.563 18:33:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:35:27.563 18:33:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:35:27.563 18:33:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:35:27.563 18:33:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=84317 00:35:27.563 18:33:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:35:27.563 18:33:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 84317 00:35:27.563 18:33:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@835 -- # '[' -z 84317 ']' 00:35:27.563 18:33:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:27.563 18:33:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:27.563 18:33:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:27.563 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:27.563 18:33:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:27.563 18:33:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:35:27.563 [2024-12-06 18:33:58.494519] Starting SPDK v25.01-pre git sha1 b6a18b192 / DPDK 24.03.0 initialization... 00:35:27.563 [2024-12-06 18:33:58.494901] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.ealI/O size of 3145728 is greater than zero copy threshold (65536). 00:35:27.563 Zero copy mechanism will not be used. 
00:35:27.563 :6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84317 ] 00:35:27.822 [2024-12-06 18:33:58.678825] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:28.080 [2024-12-06 18:33:58.805952] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:28.339 [2024-12-06 18:33:59.037888] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:35:28.339 [2024-12-06 18:33:59.038189] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:35:28.599 18:33:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:28.599 18:33:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@868 -- # return 0 00:35:28.599 18:33:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:35:28.599 18:33:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:35:28.599 18:33:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:28.599 18:33:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:35:28.599 BaseBdev1_malloc 00:35:28.599 18:33:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:28.599 18:33:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:35:28.599 18:33:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:28.599 18:33:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:35:28.599 [2024-12-06 18:33:59.376756] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:35:28.599 [2024-12-06 18:33:59.376988] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 
00:35:28.599 [2024-12-06 18:33:59.377058] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:35:28.599 [2024-12-06 18:33:59.377163] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:35:28.599 [2024-12-06 18:33:59.380073] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:35:28.599 BaseBdev1 00:35:28.599 [2024-12-06 18:33:59.380245] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:35:28.599 18:33:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:28.599 18:33:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:35:28.599 18:33:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:35:28.599 18:33:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:28.599 18:33:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:35:28.599 BaseBdev2_malloc 00:35:28.599 18:33:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:28.599 18:33:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:35:28.599 18:33:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:28.599 18:33:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:35:28.599 [2024-12-06 18:33:59.438179] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:35:28.599 [2024-12-06 18:33:59.438371] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:35:28.599 [2024-12-06 18:33:59.438435] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:35:28.599 [2024-12-06 18:33:59.438527] vbdev_passthru.c: 
697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:35:28.599 [2024-12-06 18:33:59.441247] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:35:28.599 [2024-12-06 18:33:59.441412] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:35:28.599 BaseBdev2 00:35:28.599 18:33:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:28.599 18:33:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:35:28.599 18:33:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:35:28.599 18:33:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:28.599 18:33:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:35:28.599 BaseBdev3_malloc 00:35:28.599 18:33:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:28.599 18:33:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:35:28.599 18:33:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:28.599 18:33:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:35:28.599 [2024-12-06 18:33:59.529128] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:35:28.599 [2024-12-06 18:33:59.529325] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:35:28.599 [2024-12-06 18:33:59.529360] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:35:28.599 [2024-12-06 18:33:59.529377] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:35:28.599 [2024-12-06 18:33:59.532145] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:35:28.599 [2024-12-06 
18:33:59.532204] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:35:28.599 BaseBdev3 00:35:28.599 18:33:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:28.599 18:33:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:35:28.599 18:33:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:35:28.599 18:33:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:28.599 18:33:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:35:28.858 BaseBdev4_malloc 00:35:28.858 18:33:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:28.858 18:33:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:35:28.858 18:33:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:28.858 18:33:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:35:28.858 [2024-12-06 18:33:59.589899] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:35:28.858 [2024-12-06 18:33:59.590079] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:35:28.858 [2024-12-06 18:33:59.590139] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:35:28.858 [2024-12-06 18:33:59.590239] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:35:28.858 [2024-12-06 18:33:59.593009] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:35:28.858 [2024-12-06 18:33:59.593171] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:35:28.858 BaseBdev4 00:35:28.859 18:33:59 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:28.859 18:33:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:35:28.859 18:33:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:28.859 18:33:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:35:28.859 spare_malloc 00:35:28.859 18:33:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:28.859 18:33:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:35:28.859 18:33:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:28.859 18:33:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:35:28.859 spare_delay 00:35:28.859 18:33:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:28.859 18:33:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:35:28.859 18:33:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:28.859 18:33:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:35:28.859 [2024-12-06 18:33:59.662675] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:35:28.859 [2024-12-06 18:33:59.662728] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:35:28.859 [2024-12-06 18:33:59.662747] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:35:28.859 [2024-12-06 18:33:59.662762] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:35:28.859 [2024-12-06 18:33:59.665388] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:35:28.859 [2024-12-06 18:33:59.665428] 
vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:35:28.859 spare 00:35:28.859 18:33:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:28.859 18:33:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:35:28.859 18:33:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:28.859 18:33:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:35:28.859 [2024-12-06 18:33:59.674715] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:35:28.859 [2024-12-06 18:33:59.677247] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:35:28.859 [2024-12-06 18:33:59.677461] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:35:28.859 [2024-12-06 18:33:59.677559] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:35:28.859 [2024-12-06 18:33:59.677750] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:35:28.859 [2024-12-06 18:33:59.677774] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:35:28.859 [2024-12-06 18:33:59.678075] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:35:28.859 [2024-12-06 18:33:59.686183] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:35:28.859 [2024-12-06 18:33:59.686204] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:35:28.859 [2024-12-06 18:33:59.686407] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:35:28.859 18:33:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:28.859 18:33:59 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:35:28.859 18:33:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:35:28.859 18:33:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:35:28.859 18:33:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:35:28.859 18:33:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:35:28.859 18:33:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:35:28.859 18:33:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:35:28.859 18:33:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:35:28.859 18:33:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:35:28.859 18:33:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:35:28.859 18:33:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:28.859 18:33:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:28.859 18:33:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:35:28.859 18:33:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:28.859 18:33:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:28.859 18:33:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:35:28.859 "name": "raid_bdev1", 00:35:28.859 "uuid": "83f940f6-2e34-4c26-8a94-8472df5bc235", 00:35:28.859 "strip_size_kb": 64, 00:35:28.859 "state": "online", 00:35:28.859 "raid_level": "raid5f", 00:35:28.859 "superblock": false, 00:35:28.859 "num_base_bdevs": 4, 00:35:28.859 
"num_base_bdevs_discovered": 4, 00:35:28.859 "num_base_bdevs_operational": 4, 00:35:28.859 "base_bdevs_list": [ 00:35:28.859 { 00:35:28.859 "name": "BaseBdev1", 00:35:28.859 "uuid": "cf1c704a-27df-567d-bb83-097e222aef69", 00:35:28.859 "is_configured": true, 00:35:28.859 "data_offset": 0, 00:35:28.859 "data_size": 65536 00:35:28.859 }, 00:35:28.859 { 00:35:28.859 "name": "BaseBdev2", 00:35:28.859 "uuid": "038a764a-487f-50db-ab8b-7dd75a77bcb2", 00:35:28.859 "is_configured": true, 00:35:28.859 "data_offset": 0, 00:35:28.859 "data_size": 65536 00:35:28.859 }, 00:35:28.859 { 00:35:28.859 "name": "BaseBdev3", 00:35:28.859 "uuid": "75a66f3c-9a4a-5d72-bf7f-319f13fb1a59", 00:35:28.859 "is_configured": true, 00:35:28.859 "data_offset": 0, 00:35:28.859 "data_size": 65536 00:35:28.859 }, 00:35:28.859 { 00:35:28.859 "name": "BaseBdev4", 00:35:28.859 "uuid": "b3be03d3-d2ea-5c9c-af06-5aec3135da41", 00:35:28.859 "is_configured": true, 00:35:28.859 "data_offset": 0, 00:35:28.859 "data_size": 65536 00:35:28.859 } 00:35:28.859 ] 00:35:28.859 }' 00:35:28.859 18:33:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:35:28.859 18:33:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:35:29.426 18:34:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:35:29.426 18:34:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:35:29.426 18:34:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:29.426 18:34:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:35:29.426 [2024-12-06 18:34:00.143842] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:35:29.426 18:34:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:29.426 18:34:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=196608 
00:35:29.426 18:34:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:35:29.426 18:34:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:29.426 18:34:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:29.426 18:34:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:35:29.426 18:34:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:29.426 18:34:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:35:29.426 18:34:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:35:29.426 18:34:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:35:29.426 18:34:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:35:29.426 18:34:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:35:29.426 18:34:00 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:35:29.426 18:34:00 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:35:29.426 18:34:00 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:35:29.426 18:34:00 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:35:29.426 18:34:00 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:35:29.426 18:34:00 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:35:29.426 18:34:00 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:35:29.426 18:34:00 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:35:29.426 18:34:00 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:35:29.685 [2024-12-06 18:34:00.415381] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:35:29.685 /dev/nbd0 00:35:29.685 18:34:00 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:35:29.685 18:34:00 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:35:29.685 18:34:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:35:29.685 18:34:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:35:29.685 18:34:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:35:29.685 18:34:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:35:29.685 18:34:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:35:29.685 18:34:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:35:29.685 18:34:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:35:29.685 18:34:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:35:29.685 18:34:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:35:29.685 1+0 records in 00:35:29.685 1+0 records out 00:35:29.685 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000369036 s, 11.1 MB/s 00:35:29.685 18:34:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:35:29.685 18:34:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:35:29.685 18:34:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 
00:35:29.685 18:34:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:35:29.685 18:34:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:35:29.685 18:34:00 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:35:29.685 18:34:00 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:35:29.685 18:34:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:35:29.685 18:34:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@630 -- # write_unit_size=384 00:35:29.685 18:34:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@631 -- # echo 192 00:35:29.685 18:34:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=196608 count=512 oflag=direct 00:35:30.252 512+0 records in 00:35:30.253 512+0 records out 00:35:30.253 100663296 bytes (101 MB, 96 MiB) copied, 0.508757 s, 198 MB/s 00:35:30.253 18:34:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:35:30.253 18:34:01 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:35:30.253 18:34:01 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:35:30.253 18:34:01 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:35:30.253 18:34:01 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:35:30.253 18:34:01 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:35:30.253 18:34:01 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:35:30.512 [2024-12-06 18:34:01.207756] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:35:30.512 18:34:01 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename 
/dev/nbd0 00:35:30.512 18:34:01 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:35:30.512 18:34:01 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:35:30.512 18:34:01 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:35:30.512 18:34:01 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:35:30.512 18:34:01 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:35:30.512 18:34:01 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:35:30.512 18:34:01 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:35:30.512 18:34:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:35:30.512 18:34:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:30.512 18:34:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:35:30.512 [2024-12-06 18:34:01.241735] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:35:30.512 18:34:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:30.512 18:34:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:35:30.512 18:34:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:35:30.512 18:34:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:35:30.512 18:34:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:35:30.512 18:34:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:35:30.512 18:34:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:35:30.512 18:34:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 
-- # local raid_bdev_info 00:35:30.512 18:34:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:35:30.512 18:34:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:35:30.512 18:34:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:35:30.512 18:34:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:30.512 18:34:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:30.512 18:34:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:30.512 18:34:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:35:30.512 18:34:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:30.512 18:34:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:35:30.512 "name": "raid_bdev1", 00:35:30.512 "uuid": "83f940f6-2e34-4c26-8a94-8472df5bc235", 00:35:30.512 "strip_size_kb": 64, 00:35:30.512 "state": "online", 00:35:30.512 "raid_level": "raid5f", 00:35:30.512 "superblock": false, 00:35:30.512 "num_base_bdevs": 4, 00:35:30.512 "num_base_bdevs_discovered": 3, 00:35:30.512 "num_base_bdevs_operational": 3, 00:35:30.512 "base_bdevs_list": [ 00:35:30.512 { 00:35:30.512 "name": null, 00:35:30.512 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:30.512 "is_configured": false, 00:35:30.512 "data_offset": 0, 00:35:30.512 "data_size": 65536 00:35:30.512 }, 00:35:30.512 { 00:35:30.512 "name": "BaseBdev2", 00:35:30.512 "uuid": "038a764a-487f-50db-ab8b-7dd75a77bcb2", 00:35:30.512 "is_configured": true, 00:35:30.512 "data_offset": 0, 00:35:30.512 "data_size": 65536 00:35:30.512 }, 00:35:30.512 { 00:35:30.512 "name": "BaseBdev3", 00:35:30.512 "uuid": "75a66f3c-9a4a-5d72-bf7f-319f13fb1a59", 00:35:30.512 "is_configured": true, 00:35:30.512 "data_offset": 0, 
00:35:30.512 "data_size": 65536 00:35:30.512 }, 00:35:30.512 { 00:35:30.512 "name": "BaseBdev4", 00:35:30.512 "uuid": "b3be03d3-d2ea-5c9c-af06-5aec3135da41", 00:35:30.512 "is_configured": true, 00:35:30.512 "data_offset": 0, 00:35:30.512 "data_size": 65536 00:35:30.512 } 00:35:30.512 ] 00:35:30.512 }' 00:35:30.512 18:34:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:35:30.512 18:34:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:35:30.772 18:34:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:35:30.772 18:34:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:30.772 18:34:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:35:30.772 [2024-12-06 18:34:01.649208] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:35:30.772 [2024-12-06 18:34:01.665652] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b750 00:35:30.772 18:34:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:30.772 18:34:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:35:30.772 [2024-12-06 18:34:01.675491] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:35:32.154 18:34:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:35:32.154 18:34:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:35:32.154 18:34:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:35:32.154 18:34:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:35:32.154 18:34:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:35:32.154 18:34:02 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:32.154 18:34:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:32.154 18:34:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:32.154 18:34:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:35:32.154 18:34:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:32.154 18:34:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:35:32.155 "name": "raid_bdev1", 00:35:32.155 "uuid": "83f940f6-2e34-4c26-8a94-8472df5bc235", 00:35:32.155 "strip_size_kb": 64, 00:35:32.155 "state": "online", 00:35:32.155 "raid_level": "raid5f", 00:35:32.155 "superblock": false, 00:35:32.155 "num_base_bdevs": 4, 00:35:32.155 "num_base_bdevs_discovered": 4, 00:35:32.155 "num_base_bdevs_operational": 4, 00:35:32.155 "process": { 00:35:32.155 "type": "rebuild", 00:35:32.155 "target": "spare", 00:35:32.155 "progress": { 00:35:32.155 "blocks": 19200, 00:35:32.155 "percent": 9 00:35:32.155 } 00:35:32.155 }, 00:35:32.155 "base_bdevs_list": [ 00:35:32.155 { 00:35:32.155 "name": "spare", 00:35:32.155 "uuid": "afa2acd0-97b5-5a9b-b03b-b9208ce6ad80", 00:35:32.155 "is_configured": true, 00:35:32.155 "data_offset": 0, 00:35:32.155 "data_size": 65536 00:35:32.155 }, 00:35:32.155 { 00:35:32.155 "name": "BaseBdev2", 00:35:32.155 "uuid": "038a764a-487f-50db-ab8b-7dd75a77bcb2", 00:35:32.155 "is_configured": true, 00:35:32.155 "data_offset": 0, 00:35:32.155 "data_size": 65536 00:35:32.155 }, 00:35:32.155 { 00:35:32.155 "name": "BaseBdev3", 00:35:32.155 "uuid": "75a66f3c-9a4a-5d72-bf7f-319f13fb1a59", 00:35:32.155 "is_configured": true, 00:35:32.155 "data_offset": 0, 00:35:32.155 "data_size": 65536 00:35:32.155 }, 00:35:32.155 { 00:35:32.155 "name": "BaseBdev4", 00:35:32.155 "uuid": "b3be03d3-d2ea-5c9c-af06-5aec3135da41", 
00:35:32.155 "is_configured": true, 00:35:32.155 "data_offset": 0, 00:35:32.155 "data_size": 65536 00:35:32.155 } 00:35:32.155 ] 00:35:32.155 }' 00:35:32.155 18:34:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:35:32.155 18:34:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:35:32.155 18:34:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:35:32.155 18:34:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:35:32.155 18:34:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:35:32.155 18:34:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:32.155 18:34:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:35:32.155 [2024-12-06 18:34:02.819318] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:35:32.155 [2024-12-06 18:34:02.883362] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:35:32.155 [2024-12-06 18:34:02.883433] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:35:32.155 [2024-12-06 18:34:02.883454] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:35:32.155 [2024-12-06 18:34:02.883467] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:35:32.155 18:34:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:32.155 18:34:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:35:32.155 18:34:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:35:32.155 18:34:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:35:32.155 18:34:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:35:32.155 18:34:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:35:32.155 18:34:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:35:32.155 18:34:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:35:32.155 18:34:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:35:32.155 18:34:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:35:32.155 18:34:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:35:32.155 18:34:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:32.155 18:34:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:32.155 18:34:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:35:32.155 18:34:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:32.155 18:34:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:32.155 18:34:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:35:32.155 "name": "raid_bdev1", 00:35:32.155 "uuid": "83f940f6-2e34-4c26-8a94-8472df5bc235", 00:35:32.155 "strip_size_kb": 64, 00:35:32.155 "state": "online", 00:35:32.155 "raid_level": "raid5f", 00:35:32.155 "superblock": false, 00:35:32.155 "num_base_bdevs": 4, 00:35:32.155 "num_base_bdevs_discovered": 3, 00:35:32.155 "num_base_bdevs_operational": 3, 00:35:32.155 "base_bdevs_list": [ 00:35:32.155 { 00:35:32.155 "name": null, 00:35:32.155 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:32.155 "is_configured": false, 00:35:32.155 "data_offset": 0, 00:35:32.155 "data_size": 65536 
00:35:32.155 }, 00:35:32.155 { 00:35:32.155 "name": "BaseBdev2", 00:35:32.155 "uuid": "038a764a-487f-50db-ab8b-7dd75a77bcb2", 00:35:32.155 "is_configured": true, 00:35:32.155 "data_offset": 0, 00:35:32.155 "data_size": 65536 00:35:32.155 }, 00:35:32.155 { 00:35:32.155 "name": "BaseBdev3", 00:35:32.155 "uuid": "75a66f3c-9a4a-5d72-bf7f-319f13fb1a59", 00:35:32.155 "is_configured": true, 00:35:32.155 "data_offset": 0, 00:35:32.155 "data_size": 65536 00:35:32.155 }, 00:35:32.155 { 00:35:32.155 "name": "BaseBdev4", 00:35:32.155 "uuid": "b3be03d3-d2ea-5c9c-af06-5aec3135da41", 00:35:32.155 "is_configured": true, 00:35:32.155 "data_offset": 0, 00:35:32.155 "data_size": 65536 00:35:32.155 } 00:35:32.155 ] 00:35:32.155 }' 00:35:32.155 18:34:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:35:32.155 18:34:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:35:32.414 18:34:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:35:32.414 18:34:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:35:32.415 18:34:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:35:32.415 18:34:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:35:32.415 18:34:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:35:32.415 18:34:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:32.415 18:34:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:32.415 18:34:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:32.415 18:34:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:35:32.675 18:34:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 
0 == 0 ]] 00:35:32.675 18:34:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:35:32.675 "name": "raid_bdev1", 00:35:32.675 "uuid": "83f940f6-2e34-4c26-8a94-8472df5bc235", 00:35:32.675 "strip_size_kb": 64, 00:35:32.675 "state": "online", 00:35:32.675 "raid_level": "raid5f", 00:35:32.675 "superblock": false, 00:35:32.675 "num_base_bdevs": 4, 00:35:32.675 "num_base_bdevs_discovered": 3, 00:35:32.675 "num_base_bdevs_operational": 3, 00:35:32.675 "base_bdevs_list": [ 00:35:32.675 { 00:35:32.675 "name": null, 00:35:32.675 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:32.675 "is_configured": false, 00:35:32.675 "data_offset": 0, 00:35:32.675 "data_size": 65536 00:35:32.675 }, 00:35:32.675 { 00:35:32.675 "name": "BaseBdev2", 00:35:32.675 "uuid": "038a764a-487f-50db-ab8b-7dd75a77bcb2", 00:35:32.675 "is_configured": true, 00:35:32.675 "data_offset": 0, 00:35:32.675 "data_size": 65536 00:35:32.675 }, 00:35:32.675 { 00:35:32.675 "name": "BaseBdev3", 00:35:32.675 "uuid": "75a66f3c-9a4a-5d72-bf7f-319f13fb1a59", 00:35:32.675 "is_configured": true, 00:35:32.675 "data_offset": 0, 00:35:32.675 "data_size": 65536 00:35:32.675 }, 00:35:32.675 { 00:35:32.675 "name": "BaseBdev4", 00:35:32.675 "uuid": "b3be03d3-d2ea-5c9c-af06-5aec3135da41", 00:35:32.675 "is_configured": true, 00:35:32.675 "data_offset": 0, 00:35:32.675 "data_size": 65536 00:35:32.675 } 00:35:32.675 ] 00:35:32.675 }' 00:35:32.675 18:34:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:35:32.675 18:34:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:35:32.675 18:34:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:35:32.675 18:34:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:35:32.675 18:34:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 
00:35:32.675 18:34:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:32.675 18:34:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:35:32.675 [2024-12-06 18:34:03.472541] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:35:32.675 [2024-12-06 18:34:03.488126] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b820 00:35:32.675 18:34:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:32.675 18:34:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:35:32.675 [2024-12-06 18:34:03.497944] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:35:33.614 18:34:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:35:33.614 18:34:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:35:33.614 18:34:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:35:33.614 18:34:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:35:33.614 18:34:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:35:33.614 18:34:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:33.614 18:34:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:33.614 18:34:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:33.614 18:34:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:35:33.614 18:34:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:33.614 18:34:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:35:33.614 
"name": "raid_bdev1", 00:35:33.614 "uuid": "83f940f6-2e34-4c26-8a94-8472df5bc235", 00:35:33.614 "strip_size_kb": 64, 00:35:33.614 "state": "online", 00:35:33.614 "raid_level": "raid5f", 00:35:33.614 "superblock": false, 00:35:33.614 "num_base_bdevs": 4, 00:35:33.614 "num_base_bdevs_discovered": 4, 00:35:33.614 "num_base_bdevs_operational": 4, 00:35:33.614 "process": { 00:35:33.614 "type": "rebuild", 00:35:33.614 "target": "spare", 00:35:33.614 "progress": { 00:35:33.614 "blocks": 19200, 00:35:33.614 "percent": 9 00:35:33.614 } 00:35:33.614 }, 00:35:33.614 "base_bdevs_list": [ 00:35:33.614 { 00:35:33.614 "name": "spare", 00:35:33.614 "uuid": "afa2acd0-97b5-5a9b-b03b-b9208ce6ad80", 00:35:33.614 "is_configured": true, 00:35:33.614 "data_offset": 0, 00:35:33.614 "data_size": 65536 00:35:33.614 }, 00:35:33.614 { 00:35:33.614 "name": "BaseBdev2", 00:35:33.614 "uuid": "038a764a-487f-50db-ab8b-7dd75a77bcb2", 00:35:33.614 "is_configured": true, 00:35:33.614 "data_offset": 0, 00:35:33.614 "data_size": 65536 00:35:33.614 }, 00:35:33.614 { 00:35:33.614 "name": "BaseBdev3", 00:35:33.614 "uuid": "75a66f3c-9a4a-5d72-bf7f-319f13fb1a59", 00:35:33.614 "is_configured": true, 00:35:33.614 "data_offset": 0, 00:35:33.614 "data_size": 65536 00:35:33.614 }, 00:35:33.614 { 00:35:33.614 "name": "BaseBdev4", 00:35:33.614 "uuid": "b3be03d3-d2ea-5c9c-af06-5aec3135da41", 00:35:33.614 "is_configured": true, 00:35:33.614 "data_offset": 0, 00:35:33.614 "data_size": 65536 00:35:33.614 } 00:35:33.614 ] 00:35:33.614 }' 00:35:33.614 18:34:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:35:33.874 18:34:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:35:33.874 18:34:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:35:33.874 18:34:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:35:33.874 18:34:04 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:35:33.874 18:34:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:35:33.874 18:34:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:35:33.874 18:34:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=621 00:35:33.874 18:34:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:35:33.874 18:34:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:35:33.874 18:34:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:35:33.874 18:34:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:35:33.874 18:34:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:35:33.874 18:34:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:35:33.874 18:34:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:33.874 18:34:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:33.874 18:34:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:35:33.874 18:34:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:33.874 18:34:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:33.874 18:34:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:35:33.874 "name": "raid_bdev1", 00:35:33.874 "uuid": "83f940f6-2e34-4c26-8a94-8472df5bc235", 00:35:33.874 "strip_size_kb": 64, 00:35:33.874 "state": "online", 00:35:33.874 "raid_level": "raid5f", 00:35:33.874 "superblock": false, 00:35:33.874 "num_base_bdevs": 4, 00:35:33.874 
"num_base_bdevs_discovered": 4, 00:35:33.874 "num_base_bdevs_operational": 4, 00:35:33.874 "process": { 00:35:33.874 "type": "rebuild", 00:35:33.874 "target": "spare", 00:35:33.874 "progress": { 00:35:33.874 "blocks": 21120, 00:35:33.874 "percent": 10 00:35:33.874 } 00:35:33.874 }, 00:35:33.874 "base_bdevs_list": [ 00:35:33.874 { 00:35:33.874 "name": "spare", 00:35:33.874 "uuid": "afa2acd0-97b5-5a9b-b03b-b9208ce6ad80", 00:35:33.874 "is_configured": true, 00:35:33.874 "data_offset": 0, 00:35:33.874 "data_size": 65536 00:35:33.875 }, 00:35:33.875 { 00:35:33.875 "name": "BaseBdev2", 00:35:33.875 "uuid": "038a764a-487f-50db-ab8b-7dd75a77bcb2", 00:35:33.875 "is_configured": true, 00:35:33.875 "data_offset": 0, 00:35:33.875 "data_size": 65536 00:35:33.875 }, 00:35:33.875 { 00:35:33.875 "name": "BaseBdev3", 00:35:33.875 "uuid": "75a66f3c-9a4a-5d72-bf7f-319f13fb1a59", 00:35:33.875 "is_configured": true, 00:35:33.875 "data_offset": 0, 00:35:33.875 "data_size": 65536 00:35:33.875 }, 00:35:33.875 { 00:35:33.875 "name": "BaseBdev4", 00:35:33.875 "uuid": "b3be03d3-d2ea-5c9c-af06-5aec3135da41", 00:35:33.875 "is_configured": true, 00:35:33.875 "data_offset": 0, 00:35:33.875 "data_size": 65536 00:35:33.875 } 00:35:33.875 ] 00:35:33.875 }' 00:35:33.875 18:34:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:35:33.875 18:34:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:35:33.875 18:34:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:35:33.875 18:34:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:35:33.875 18:34:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:35:35.251 18:34:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:35:35.251 18:34:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process 
raid_bdev1 rebuild spare 00:35:35.251 18:34:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:35:35.251 18:34:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:35:35.251 18:34:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:35:35.251 18:34:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:35:35.251 18:34:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:35.251 18:34:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:35.251 18:34:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:35.251 18:34:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:35:35.251 18:34:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:35.251 18:34:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:35:35.251 "name": "raid_bdev1", 00:35:35.251 "uuid": "83f940f6-2e34-4c26-8a94-8472df5bc235", 00:35:35.251 "strip_size_kb": 64, 00:35:35.251 "state": "online", 00:35:35.251 "raid_level": "raid5f", 00:35:35.251 "superblock": false, 00:35:35.251 "num_base_bdevs": 4, 00:35:35.251 "num_base_bdevs_discovered": 4, 00:35:35.251 "num_base_bdevs_operational": 4, 00:35:35.251 "process": { 00:35:35.251 "type": "rebuild", 00:35:35.251 "target": "spare", 00:35:35.251 "progress": { 00:35:35.251 "blocks": 42240, 00:35:35.251 "percent": 21 00:35:35.251 } 00:35:35.251 }, 00:35:35.251 "base_bdevs_list": [ 00:35:35.251 { 00:35:35.252 "name": "spare", 00:35:35.252 "uuid": "afa2acd0-97b5-5a9b-b03b-b9208ce6ad80", 00:35:35.252 "is_configured": true, 00:35:35.252 "data_offset": 0, 00:35:35.252 "data_size": 65536 00:35:35.252 }, 00:35:35.252 { 00:35:35.252 "name": "BaseBdev2", 00:35:35.252 "uuid": 
"038a764a-487f-50db-ab8b-7dd75a77bcb2", 00:35:35.252 "is_configured": true, 00:35:35.252 "data_offset": 0, 00:35:35.252 "data_size": 65536 00:35:35.252 }, 00:35:35.252 { 00:35:35.252 "name": "BaseBdev3", 00:35:35.252 "uuid": "75a66f3c-9a4a-5d72-bf7f-319f13fb1a59", 00:35:35.252 "is_configured": true, 00:35:35.252 "data_offset": 0, 00:35:35.252 "data_size": 65536 00:35:35.252 }, 00:35:35.252 { 00:35:35.252 "name": "BaseBdev4", 00:35:35.252 "uuid": "b3be03d3-d2ea-5c9c-af06-5aec3135da41", 00:35:35.252 "is_configured": true, 00:35:35.252 "data_offset": 0, 00:35:35.252 "data_size": 65536 00:35:35.252 } 00:35:35.252 ] 00:35:35.252 }' 00:35:35.252 18:34:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:35:35.252 18:34:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:35:35.252 18:34:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:35:35.252 18:34:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:35:35.252 18:34:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:35:36.191 18:34:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:35:36.191 18:34:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:35:36.191 18:34:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:35:36.191 18:34:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:35:36.191 18:34:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:35:36.191 18:34:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:35:36.191 18:34:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:36.191 18:34:06 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:36.191 18:34:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:36.191 18:34:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:35:36.191 18:34:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:36.191 18:34:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:35:36.191 "name": "raid_bdev1", 00:35:36.191 "uuid": "83f940f6-2e34-4c26-8a94-8472df5bc235", 00:35:36.191 "strip_size_kb": 64, 00:35:36.191 "state": "online", 00:35:36.191 "raid_level": "raid5f", 00:35:36.191 "superblock": false, 00:35:36.191 "num_base_bdevs": 4, 00:35:36.191 "num_base_bdevs_discovered": 4, 00:35:36.191 "num_base_bdevs_operational": 4, 00:35:36.191 "process": { 00:35:36.191 "type": "rebuild", 00:35:36.191 "target": "spare", 00:35:36.191 "progress": { 00:35:36.191 "blocks": 65280, 00:35:36.191 "percent": 33 00:35:36.191 } 00:35:36.191 }, 00:35:36.192 "base_bdevs_list": [ 00:35:36.192 { 00:35:36.192 "name": "spare", 00:35:36.192 "uuid": "afa2acd0-97b5-5a9b-b03b-b9208ce6ad80", 00:35:36.192 "is_configured": true, 00:35:36.192 "data_offset": 0, 00:35:36.192 "data_size": 65536 00:35:36.192 }, 00:35:36.192 { 00:35:36.192 "name": "BaseBdev2", 00:35:36.192 "uuid": "038a764a-487f-50db-ab8b-7dd75a77bcb2", 00:35:36.192 "is_configured": true, 00:35:36.192 "data_offset": 0, 00:35:36.192 "data_size": 65536 00:35:36.192 }, 00:35:36.192 { 00:35:36.192 "name": "BaseBdev3", 00:35:36.192 "uuid": "75a66f3c-9a4a-5d72-bf7f-319f13fb1a59", 00:35:36.192 "is_configured": true, 00:35:36.192 "data_offset": 0, 00:35:36.192 "data_size": 65536 00:35:36.192 }, 00:35:36.192 { 00:35:36.192 "name": "BaseBdev4", 00:35:36.192 "uuid": "b3be03d3-d2ea-5c9c-af06-5aec3135da41", 00:35:36.192 "is_configured": true, 00:35:36.192 "data_offset": 0, 00:35:36.192 "data_size": 65536 00:35:36.192 } 
00:35:36.192 ] 00:35:36.192 }' 00:35:36.192 18:34:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:35:36.192 18:34:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:35:36.192 18:34:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:35:36.192 18:34:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:35:36.192 18:34:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:35:37.130 18:34:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:35:37.130 18:34:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:35:37.130 18:34:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:35:37.130 18:34:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:35:37.130 18:34:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:35:37.130 18:34:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:35:37.130 18:34:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:37.130 18:34:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:37.130 18:34:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:37.130 18:34:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:35:37.388 18:34:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:37.388 18:34:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:35:37.388 "name": "raid_bdev1", 00:35:37.388 "uuid": "83f940f6-2e34-4c26-8a94-8472df5bc235", 00:35:37.388 
"strip_size_kb": 64, 00:35:37.388 "state": "online", 00:35:37.388 "raid_level": "raid5f", 00:35:37.388 "superblock": false, 00:35:37.388 "num_base_bdevs": 4, 00:35:37.388 "num_base_bdevs_discovered": 4, 00:35:37.388 "num_base_bdevs_operational": 4, 00:35:37.388 "process": { 00:35:37.388 "type": "rebuild", 00:35:37.388 "target": "spare", 00:35:37.388 "progress": { 00:35:37.388 "blocks": 86400, 00:35:37.388 "percent": 43 00:35:37.388 } 00:35:37.388 }, 00:35:37.388 "base_bdevs_list": [ 00:35:37.388 { 00:35:37.388 "name": "spare", 00:35:37.388 "uuid": "afa2acd0-97b5-5a9b-b03b-b9208ce6ad80", 00:35:37.388 "is_configured": true, 00:35:37.388 "data_offset": 0, 00:35:37.388 "data_size": 65536 00:35:37.388 }, 00:35:37.388 { 00:35:37.388 "name": "BaseBdev2", 00:35:37.388 "uuid": "038a764a-487f-50db-ab8b-7dd75a77bcb2", 00:35:37.388 "is_configured": true, 00:35:37.388 "data_offset": 0, 00:35:37.388 "data_size": 65536 00:35:37.388 }, 00:35:37.388 { 00:35:37.388 "name": "BaseBdev3", 00:35:37.388 "uuid": "75a66f3c-9a4a-5d72-bf7f-319f13fb1a59", 00:35:37.388 "is_configured": true, 00:35:37.388 "data_offset": 0, 00:35:37.388 "data_size": 65536 00:35:37.388 }, 00:35:37.388 { 00:35:37.388 "name": "BaseBdev4", 00:35:37.388 "uuid": "b3be03d3-d2ea-5c9c-af06-5aec3135da41", 00:35:37.388 "is_configured": true, 00:35:37.388 "data_offset": 0, 00:35:37.388 "data_size": 65536 00:35:37.388 } 00:35:37.388 ] 00:35:37.388 }' 00:35:37.388 18:34:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:35:37.388 18:34:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:35:37.388 18:34:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:35:37.388 18:34:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:35:37.388 18:34:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:35:38.327 18:34:09 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:35:38.327 18:34:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:35:38.327 18:34:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:35:38.327 18:34:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:35:38.327 18:34:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:35:38.327 18:34:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:35:38.327 18:34:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:38.327 18:34:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:38.327 18:34:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:38.328 18:34:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:35:38.328 18:34:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:38.328 18:34:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:35:38.328 "name": "raid_bdev1", 00:35:38.328 "uuid": "83f940f6-2e34-4c26-8a94-8472df5bc235", 00:35:38.328 "strip_size_kb": 64, 00:35:38.328 "state": "online", 00:35:38.328 "raid_level": "raid5f", 00:35:38.328 "superblock": false, 00:35:38.328 "num_base_bdevs": 4, 00:35:38.328 "num_base_bdevs_discovered": 4, 00:35:38.328 "num_base_bdevs_operational": 4, 00:35:38.328 "process": { 00:35:38.328 "type": "rebuild", 00:35:38.328 "target": "spare", 00:35:38.328 "progress": { 00:35:38.328 "blocks": 109440, 00:35:38.328 "percent": 55 00:35:38.328 } 00:35:38.328 }, 00:35:38.328 "base_bdevs_list": [ 00:35:38.328 { 00:35:38.328 "name": "spare", 00:35:38.328 "uuid": "afa2acd0-97b5-5a9b-b03b-b9208ce6ad80", 
00:35:38.328 "is_configured": true, 00:35:38.328 "data_offset": 0, 00:35:38.328 "data_size": 65536 00:35:38.328 }, 00:35:38.328 { 00:35:38.328 "name": "BaseBdev2", 00:35:38.328 "uuid": "038a764a-487f-50db-ab8b-7dd75a77bcb2", 00:35:38.328 "is_configured": true, 00:35:38.328 "data_offset": 0, 00:35:38.328 "data_size": 65536 00:35:38.328 }, 00:35:38.328 { 00:35:38.328 "name": "BaseBdev3", 00:35:38.328 "uuid": "75a66f3c-9a4a-5d72-bf7f-319f13fb1a59", 00:35:38.328 "is_configured": true, 00:35:38.328 "data_offset": 0, 00:35:38.328 "data_size": 65536 00:35:38.328 }, 00:35:38.328 { 00:35:38.328 "name": "BaseBdev4", 00:35:38.328 "uuid": "b3be03d3-d2ea-5c9c-af06-5aec3135da41", 00:35:38.328 "is_configured": true, 00:35:38.328 "data_offset": 0, 00:35:38.328 "data_size": 65536 00:35:38.328 } 00:35:38.328 ] 00:35:38.328 }' 00:35:38.587 18:34:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:35:38.588 18:34:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:35:38.588 18:34:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:35:38.588 18:34:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:35:38.588 18:34:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:35:39.528 18:34:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:35:39.528 18:34:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:35:39.528 18:34:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:35:39.528 18:34:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:35:39.528 18:34:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:35:39.528 18:34:10 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:35:39.528 18:34:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:39.528 18:34:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:39.528 18:34:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:39.528 18:34:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:35:39.528 18:34:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:39.528 18:34:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:35:39.528 "name": "raid_bdev1", 00:35:39.528 "uuid": "83f940f6-2e34-4c26-8a94-8472df5bc235", 00:35:39.528 "strip_size_kb": 64, 00:35:39.528 "state": "online", 00:35:39.528 "raid_level": "raid5f", 00:35:39.528 "superblock": false, 00:35:39.528 "num_base_bdevs": 4, 00:35:39.528 "num_base_bdevs_discovered": 4, 00:35:39.528 "num_base_bdevs_operational": 4, 00:35:39.528 "process": { 00:35:39.528 "type": "rebuild", 00:35:39.528 "target": "spare", 00:35:39.528 "progress": { 00:35:39.528 "blocks": 130560, 00:35:39.528 "percent": 66 00:35:39.528 } 00:35:39.528 }, 00:35:39.528 "base_bdevs_list": [ 00:35:39.528 { 00:35:39.528 "name": "spare", 00:35:39.528 "uuid": "afa2acd0-97b5-5a9b-b03b-b9208ce6ad80", 00:35:39.528 "is_configured": true, 00:35:39.528 "data_offset": 0, 00:35:39.528 "data_size": 65536 00:35:39.528 }, 00:35:39.528 { 00:35:39.528 "name": "BaseBdev2", 00:35:39.528 "uuid": "038a764a-487f-50db-ab8b-7dd75a77bcb2", 00:35:39.528 "is_configured": true, 00:35:39.528 "data_offset": 0, 00:35:39.528 "data_size": 65536 00:35:39.528 }, 00:35:39.528 { 00:35:39.528 "name": "BaseBdev3", 00:35:39.528 "uuid": "75a66f3c-9a4a-5d72-bf7f-319f13fb1a59", 00:35:39.528 "is_configured": true, 00:35:39.528 "data_offset": 0, 00:35:39.528 "data_size": 65536 00:35:39.528 }, 00:35:39.528 { 00:35:39.528 "name": 
"BaseBdev4", 00:35:39.528 "uuid": "b3be03d3-d2ea-5c9c-af06-5aec3135da41", 00:35:39.528 "is_configured": true, 00:35:39.528 "data_offset": 0, 00:35:39.528 "data_size": 65536 00:35:39.528 } 00:35:39.528 ] 00:35:39.528 }' 00:35:39.528 18:34:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:35:39.787 18:34:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:35:39.787 18:34:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:35:39.787 18:34:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:35:39.787 18:34:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:35:40.726 18:34:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:35:40.726 18:34:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:35:40.726 18:34:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:35:40.726 18:34:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:35:40.726 18:34:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:35:40.726 18:34:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:35:40.726 18:34:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:40.726 18:34:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:40.726 18:34:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:40.726 18:34:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:35:40.726 18:34:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:40.726 18:34:11 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:35:40.726 "name": "raid_bdev1", 00:35:40.726 "uuid": "83f940f6-2e34-4c26-8a94-8472df5bc235", 00:35:40.726 "strip_size_kb": 64, 00:35:40.726 "state": "online", 00:35:40.726 "raid_level": "raid5f", 00:35:40.726 "superblock": false, 00:35:40.726 "num_base_bdevs": 4, 00:35:40.726 "num_base_bdevs_discovered": 4, 00:35:40.726 "num_base_bdevs_operational": 4, 00:35:40.726 "process": { 00:35:40.726 "type": "rebuild", 00:35:40.726 "target": "spare", 00:35:40.726 "progress": { 00:35:40.726 "blocks": 151680, 00:35:40.726 "percent": 77 00:35:40.726 } 00:35:40.726 }, 00:35:40.726 "base_bdevs_list": [ 00:35:40.726 { 00:35:40.726 "name": "spare", 00:35:40.726 "uuid": "afa2acd0-97b5-5a9b-b03b-b9208ce6ad80", 00:35:40.726 "is_configured": true, 00:35:40.726 "data_offset": 0, 00:35:40.726 "data_size": 65536 00:35:40.726 }, 00:35:40.726 { 00:35:40.726 "name": "BaseBdev2", 00:35:40.726 "uuid": "038a764a-487f-50db-ab8b-7dd75a77bcb2", 00:35:40.726 "is_configured": true, 00:35:40.726 "data_offset": 0, 00:35:40.726 "data_size": 65536 00:35:40.726 }, 00:35:40.726 { 00:35:40.726 "name": "BaseBdev3", 00:35:40.726 "uuid": "75a66f3c-9a4a-5d72-bf7f-319f13fb1a59", 00:35:40.726 "is_configured": true, 00:35:40.726 "data_offset": 0, 00:35:40.726 "data_size": 65536 00:35:40.726 }, 00:35:40.726 { 00:35:40.726 "name": "BaseBdev4", 00:35:40.726 "uuid": "b3be03d3-d2ea-5c9c-af06-5aec3135da41", 00:35:40.727 "is_configured": true, 00:35:40.727 "data_offset": 0, 00:35:40.727 "data_size": 65536 00:35:40.727 } 00:35:40.727 ] 00:35:40.727 }' 00:35:40.727 18:34:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:35:40.727 18:34:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:35:40.727 18:34:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:35:40.985 18:34:11 bdev_raid.raid5f_rebuild_test 
-- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:35:40.985 18:34:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:35:41.922 18:34:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:35:41.922 18:34:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:35:41.922 18:34:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:35:41.922 18:34:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:35:41.922 18:34:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:35:41.922 18:34:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:35:41.922 18:34:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:41.922 18:34:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:41.922 18:34:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:41.922 18:34:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:35:41.922 18:34:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:41.922 18:34:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:35:41.922 "name": "raid_bdev1", 00:35:41.922 "uuid": "83f940f6-2e34-4c26-8a94-8472df5bc235", 00:35:41.922 "strip_size_kb": 64, 00:35:41.922 "state": "online", 00:35:41.922 "raid_level": "raid5f", 00:35:41.922 "superblock": false, 00:35:41.922 "num_base_bdevs": 4, 00:35:41.922 "num_base_bdevs_discovered": 4, 00:35:41.922 "num_base_bdevs_operational": 4, 00:35:41.922 "process": { 00:35:41.922 "type": "rebuild", 00:35:41.922 "target": "spare", 00:35:41.922 "progress": { 00:35:41.922 "blocks": 174720, 00:35:41.922 "percent": 88 
00:35:41.922 } 00:35:41.922 }, 00:35:41.922 "base_bdevs_list": [ 00:35:41.922 { 00:35:41.922 "name": "spare", 00:35:41.922 "uuid": "afa2acd0-97b5-5a9b-b03b-b9208ce6ad80", 00:35:41.922 "is_configured": true, 00:35:41.922 "data_offset": 0, 00:35:41.922 "data_size": 65536 00:35:41.922 }, 00:35:41.922 { 00:35:41.922 "name": "BaseBdev2", 00:35:41.922 "uuid": "038a764a-487f-50db-ab8b-7dd75a77bcb2", 00:35:41.922 "is_configured": true, 00:35:41.922 "data_offset": 0, 00:35:41.922 "data_size": 65536 00:35:41.922 }, 00:35:41.922 { 00:35:41.922 "name": "BaseBdev3", 00:35:41.922 "uuid": "75a66f3c-9a4a-5d72-bf7f-319f13fb1a59", 00:35:41.922 "is_configured": true, 00:35:41.922 "data_offset": 0, 00:35:41.922 "data_size": 65536 00:35:41.922 }, 00:35:41.922 { 00:35:41.922 "name": "BaseBdev4", 00:35:41.922 "uuid": "b3be03d3-d2ea-5c9c-af06-5aec3135da41", 00:35:41.922 "is_configured": true, 00:35:41.922 "data_offset": 0, 00:35:41.922 "data_size": 65536 00:35:41.922 } 00:35:41.922 ] 00:35:41.922 }' 00:35:41.922 18:34:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:35:41.922 18:34:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:35:41.922 18:34:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:35:41.922 18:34:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:35:41.922 18:34:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:35:43.364 18:34:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:35:43.364 18:34:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:35:43.364 18:34:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:35:43.364 18:34:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 
00:35:43.364 18:34:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:35:43.364 18:34:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:35:43.364 18:34:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:43.364 18:34:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:43.364 18:34:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:43.364 18:34:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:35:43.364 18:34:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:43.364 [2024-12-06 18:34:13.874116] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:35:43.364 [2024-12-06 18:34:13.874340] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:35:43.364 [2024-12-06 18:34:13.874484] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:35:43.364 18:34:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:35:43.364 "name": "raid_bdev1", 00:35:43.364 "uuid": "83f940f6-2e34-4c26-8a94-8472df5bc235", 00:35:43.364 "strip_size_kb": 64, 00:35:43.364 "state": "online", 00:35:43.364 "raid_level": "raid5f", 00:35:43.364 "superblock": false, 00:35:43.364 "num_base_bdevs": 4, 00:35:43.364 "num_base_bdevs_discovered": 4, 00:35:43.364 "num_base_bdevs_operational": 4, 00:35:43.364 "process": { 00:35:43.364 "type": "rebuild", 00:35:43.364 "target": "spare", 00:35:43.364 "progress": { 00:35:43.364 "blocks": 195840, 00:35:43.364 "percent": 99 00:35:43.364 } 00:35:43.364 }, 00:35:43.364 "base_bdevs_list": [ 00:35:43.364 { 00:35:43.364 "name": "spare", 00:35:43.364 "uuid": "afa2acd0-97b5-5a9b-b03b-b9208ce6ad80", 00:35:43.364 "is_configured": true, 00:35:43.364 "data_offset": 
0, 00:35:43.364 "data_size": 65536 00:35:43.364 }, 00:35:43.364 { 00:35:43.364 "name": "BaseBdev2", 00:35:43.364 "uuid": "038a764a-487f-50db-ab8b-7dd75a77bcb2", 00:35:43.364 "is_configured": true, 00:35:43.364 "data_offset": 0, 00:35:43.364 "data_size": 65536 00:35:43.364 }, 00:35:43.364 { 00:35:43.364 "name": "BaseBdev3", 00:35:43.364 "uuid": "75a66f3c-9a4a-5d72-bf7f-319f13fb1a59", 00:35:43.364 "is_configured": true, 00:35:43.364 "data_offset": 0, 00:35:43.364 "data_size": 65536 00:35:43.364 }, 00:35:43.364 { 00:35:43.364 "name": "BaseBdev4", 00:35:43.364 "uuid": "b3be03d3-d2ea-5c9c-af06-5aec3135da41", 00:35:43.364 "is_configured": true, 00:35:43.365 "data_offset": 0, 00:35:43.365 "data_size": 65536 00:35:43.365 } 00:35:43.365 ] 00:35:43.365 }' 00:35:43.365 18:34:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:35:43.365 18:34:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:35:43.365 18:34:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:35:43.365 18:34:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:35:43.365 18:34:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:35:44.302 18:34:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:35:44.302 18:34:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:35:44.302 18:34:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:35:44.302 18:34:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:35:44.302 18:34:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:35:44.302 18:34:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:35:44.302 18:34:14 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:44.302 18:34:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:44.302 18:34:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:44.302 18:34:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:35:44.302 18:34:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:44.302 18:34:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:35:44.302 "name": "raid_bdev1", 00:35:44.302 "uuid": "83f940f6-2e34-4c26-8a94-8472df5bc235", 00:35:44.302 "strip_size_kb": 64, 00:35:44.302 "state": "online", 00:35:44.302 "raid_level": "raid5f", 00:35:44.302 "superblock": false, 00:35:44.302 "num_base_bdevs": 4, 00:35:44.302 "num_base_bdevs_discovered": 4, 00:35:44.302 "num_base_bdevs_operational": 4, 00:35:44.302 "base_bdevs_list": [ 00:35:44.302 { 00:35:44.302 "name": "spare", 00:35:44.302 "uuid": "afa2acd0-97b5-5a9b-b03b-b9208ce6ad80", 00:35:44.302 "is_configured": true, 00:35:44.302 "data_offset": 0, 00:35:44.302 "data_size": 65536 00:35:44.302 }, 00:35:44.302 { 00:35:44.302 "name": "BaseBdev2", 00:35:44.302 "uuid": "038a764a-487f-50db-ab8b-7dd75a77bcb2", 00:35:44.302 "is_configured": true, 00:35:44.302 "data_offset": 0, 00:35:44.302 "data_size": 65536 00:35:44.302 }, 00:35:44.302 { 00:35:44.302 "name": "BaseBdev3", 00:35:44.302 "uuid": "75a66f3c-9a4a-5d72-bf7f-319f13fb1a59", 00:35:44.302 "is_configured": true, 00:35:44.302 "data_offset": 0, 00:35:44.302 "data_size": 65536 00:35:44.302 }, 00:35:44.302 { 00:35:44.302 "name": "BaseBdev4", 00:35:44.302 "uuid": "b3be03d3-d2ea-5c9c-af06-5aec3135da41", 00:35:44.302 "is_configured": true, 00:35:44.302 "data_offset": 0, 00:35:44.302 "data_size": 65536 00:35:44.302 } 00:35:44.302 ] 00:35:44.302 }' 00:35:44.302 18:34:15 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:35:44.302 18:34:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:35:44.302 18:34:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:35:44.302 18:34:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:35:44.302 18:34:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:35:44.302 18:34:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:35:44.302 18:34:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:35:44.302 18:34:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:35:44.302 18:34:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:35:44.302 18:34:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:35:44.302 18:34:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:44.302 18:34:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:44.302 18:34:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:44.302 18:34:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:35:44.302 18:34:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:44.302 18:34:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:35:44.302 "name": "raid_bdev1", 00:35:44.302 "uuid": "83f940f6-2e34-4c26-8a94-8472df5bc235", 00:35:44.302 "strip_size_kb": 64, 00:35:44.302 "state": "online", 00:35:44.302 "raid_level": "raid5f", 00:35:44.302 "superblock": false, 00:35:44.302 "num_base_bdevs": 4, 00:35:44.302 "num_base_bdevs_discovered": 4, 
00:35:44.302 "num_base_bdevs_operational": 4, 00:35:44.302 "base_bdevs_list": [ 00:35:44.302 { 00:35:44.302 "name": "spare", 00:35:44.302 "uuid": "afa2acd0-97b5-5a9b-b03b-b9208ce6ad80", 00:35:44.302 "is_configured": true, 00:35:44.302 "data_offset": 0, 00:35:44.302 "data_size": 65536 00:35:44.302 }, 00:35:44.302 { 00:35:44.303 "name": "BaseBdev2", 00:35:44.303 "uuid": "038a764a-487f-50db-ab8b-7dd75a77bcb2", 00:35:44.303 "is_configured": true, 00:35:44.303 "data_offset": 0, 00:35:44.303 "data_size": 65536 00:35:44.303 }, 00:35:44.303 { 00:35:44.303 "name": "BaseBdev3", 00:35:44.303 "uuid": "75a66f3c-9a4a-5d72-bf7f-319f13fb1a59", 00:35:44.303 "is_configured": true, 00:35:44.303 "data_offset": 0, 00:35:44.303 "data_size": 65536 00:35:44.303 }, 00:35:44.303 { 00:35:44.303 "name": "BaseBdev4", 00:35:44.303 "uuid": "b3be03d3-d2ea-5c9c-af06-5aec3135da41", 00:35:44.303 "is_configured": true, 00:35:44.303 "data_offset": 0, 00:35:44.303 "data_size": 65536 00:35:44.303 } 00:35:44.303 ] 00:35:44.303 }' 00:35:44.303 18:34:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:35:44.303 18:34:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:35:44.303 18:34:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:35:44.303 18:34:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:35:44.303 18:34:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:35:44.303 18:34:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:35:44.303 18:34:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:35:44.303 18:34:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:35:44.303 18:34:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local 
strip_size=64 00:35:44.303 18:34:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:35:44.303 18:34:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:35:44.303 18:34:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:35:44.303 18:34:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:35:44.303 18:34:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:35:44.303 18:34:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:44.303 18:34:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:44.303 18:34:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:44.303 18:34:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:35:44.561 18:34:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:44.561 18:34:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:35:44.561 "name": "raid_bdev1", 00:35:44.561 "uuid": "83f940f6-2e34-4c26-8a94-8472df5bc235", 00:35:44.561 "strip_size_kb": 64, 00:35:44.561 "state": "online", 00:35:44.561 "raid_level": "raid5f", 00:35:44.561 "superblock": false, 00:35:44.561 "num_base_bdevs": 4, 00:35:44.561 "num_base_bdevs_discovered": 4, 00:35:44.561 "num_base_bdevs_operational": 4, 00:35:44.561 "base_bdevs_list": [ 00:35:44.561 { 00:35:44.561 "name": "spare", 00:35:44.561 "uuid": "afa2acd0-97b5-5a9b-b03b-b9208ce6ad80", 00:35:44.561 "is_configured": true, 00:35:44.561 "data_offset": 0, 00:35:44.561 "data_size": 65536 00:35:44.561 }, 00:35:44.561 { 00:35:44.561 "name": "BaseBdev2", 00:35:44.561 "uuid": "038a764a-487f-50db-ab8b-7dd75a77bcb2", 00:35:44.561 "is_configured": true, 00:35:44.561 "data_offset": 0, 00:35:44.561 
"data_size": 65536 00:35:44.561 }, 00:35:44.561 { 00:35:44.561 "name": "BaseBdev3", 00:35:44.561 "uuid": "75a66f3c-9a4a-5d72-bf7f-319f13fb1a59", 00:35:44.561 "is_configured": true, 00:35:44.561 "data_offset": 0, 00:35:44.561 "data_size": 65536 00:35:44.561 }, 00:35:44.561 { 00:35:44.561 "name": "BaseBdev4", 00:35:44.561 "uuid": "b3be03d3-d2ea-5c9c-af06-5aec3135da41", 00:35:44.561 "is_configured": true, 00:35:44.561 "data_offset": 0, 00:35:44.561 "data_size": 65536 00:35:44.561 } 00:35:44.561 ] 00:35:44.561 }' 00:35:44.561 18:34:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:35:44.561 18:34:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:35:44.819 18:34:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:35:44.819 18:34:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:44.819 18:34:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:35:44.819 [2024-12-06 18:34:15.674944] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:35:44.819 [2024-12-06 18:34:15.674987] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:35:44.819 [2024-12-06 18:34:15.675105] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:35:44.819 [2024-12-06 18:34:15.675241] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:35:44.819 [2024-12-06 18:34:15.675256] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:35:44.819 18:34:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:44.819 18:34:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:44.819 18:34:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # 
jq length 00:35:44.819 18:34:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:44.819 18:34:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:35:44.819 18:34:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:44.819 18:34:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:35:44.819 18:34:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:35:44.819 18:34:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:35:44.819 18:34:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:35:44.819 18:34:15 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:35:44.819 18:34:15 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:35:44.819 18:34:15 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:35:44.819 18:34:15 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:35:44.819 18:34:15 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:35:44.819 18:34:15 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:35:44.819 18:34:15 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:35:44.819 18:34:15 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:35:44.819 18:34:15 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:35:45.078 /dev/nbd0 00:35:45.078 18:34:15 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:35:45.078 18:34:15 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 
00:35:45.078 18:34:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:35:45.078 18:34:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:35:45.078 18:34:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:35:45.078 18:34:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:35:45.078 18:34:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:35:45.078 18:34:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:35:45.078 18:34:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:35:45.078 18:34:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:35:45.078 18:34:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:35:45.078 1+0 records in 00:35:45.078 1+0 records out 00:35:45.078 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000249156 s, 16.4 MB/s 00:35:45.078 18:34:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:35:45.078 18:34:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:35:45.078 18:34:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:35:45.078 18:34:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:35:45.078 18:34:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:35:45.078 18:34:16 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:35:45.078 18:34:16 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:35:45.078 18:34:16 bdev_raid.raid5f_rebuild_test -- 
bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:35:45.337 /dev/nbd1 00:35:45.337 18:34:16 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:35:45.337 18:34:16 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:35:45.337 18:34:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:35:45.337 18:34:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:35:45.337 18:34:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:35:45.337 18:34:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:35:45.337 18:34:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:35:45.337 18:34:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:35:45.337 18:34:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:35:45.337 18:34:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:35:45.337 18:34:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:35:45.337 1+0 records in 00:35:45.337 1+0 records out 00:35:45.337 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000446392 s, 9.2 MB/s 00:35:45.337 18:34:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:35:45.337 18:34:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:35:45.337 18:34:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:35:45.337 18:34:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 
']' 00:35:45.337 18:34:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:35:45.337 18:34:16 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:35:45.337 18:34:16 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:35:45.337 18:34:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:35:45.596 18:34:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:35:45.596 18:34:16 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:35:45.596 18:34:16 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:35:45.596 18:34:16 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:35:45.596 18:34:16 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:35:45.596 18:34:16 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:35:45.596 18:34:16 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:35:45.856 18:34:16 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:35:45.856 18:34:16 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:35:45.856 18:34:16 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:35:45.856 18:34:16 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:35:45.856 18:34:16 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:35:45.856 18:34:16 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:35:45.856 18:34:16 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:35:45.856 18:34:16 bdev_raid.raid5f_rebuild_test -- 
bdev/nbd_common.sh@45 -- # return 0 00:35:45.856 18:34:16 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:35:45.856 18:34:16 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:35:46.116 18:34:16 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:35:46.116 18:34:16 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:35:46.117 18:34:16 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:35:46.117 18:34:16 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:35:46.117 18:34:16 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:35:46.117 18:34:16 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:35:46.117 18:34:16 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:35:46.117 18:34:16 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:35:46.117 18:34:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:35:46.117 18:34:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 84317 00:35:46.117 18:34:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@954 -- # '[' -z 84317 ']' 00:35:46.117 18:34:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@958 -- # kill -0 84317 00:35:46.117 18:34:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@959 -- # uname 00:35:46.117 18:34:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:46.117 18:34:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84317 00:35:46.117 18:34:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:35:46.117 killing process with pid 
84317 00:35:46.117 Received shutdown signal, test time was about 60.000000 seconds 00:35:46.117 00:35:46.117 Latency(us) 00:35:46.117 [2024-12-06T18:34:17.066Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:46.117 [2024-12-06T18:34:17.066Z] =================================================================================================================== 00:35:46.117 [2024-12-06T18:34:17.066Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:35:46.117 18:34:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:35:46.117 18:34:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84317' 00:35:46.117 18:34:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@973 -- # kill 84317 00:35:46.117 [2024-12-06 18:34:17.047901] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:35:46.117 18:34:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@978 -- # wait 84317 00:35:46.687 [2024-12-06 18:34:17.592494] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:35:48.065 18:34:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:35:48.065 00:35:48.065 real 0m20.454s 00:35:48.065 user 0m24.092s 00:35:48.065 sys 0m2.730s 00:35:48.065 18:34:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:48.065 ************************************ 00:35:48.065 END TEST raid5f_rebuild_test 00:35:48.065 ************************************ 00:35:48.065 18:34:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:35:48.065 18:34:18 bdev_raid -- bdev/bdev_raid.sh@991 -- # run_test raid5f_rebuild_test_sb raid_rebuild_test raid5f 4 true false true 00:35:48.065 18:34:18 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:35:48.065 18:34:18 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:48.065 
18:34:18 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:35:48.065 ************************************ 00:35:48.065 START TEST raid5f_rebuild_test_sb 00:35:48.065 ************************************ 00:35:48.065 18:34:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid5f 4 true false true 00:35:48.065 18:34:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:35:48.065 18:34:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:35:48.065 18:34:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:35:48.065 18:34:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:35:48.065 18:34:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:35:48.065 18:34:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:35:48.065 18:34:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:35:48.065 18:34:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:35:48.065 18:34:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:35:48.065 18:34:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:35:48.065 18:34:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:35:48.065 18:34:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:35:48.065 18:34:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:35:48.065 18:34:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:35:48.065 18:34:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:35:48.065 18:34:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= 
num_base_bdevs )) 00:35:48.065 18:34:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:35:48.065 18:34:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:35:48.065 18:34:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:35:48.065 18:34:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:35:48.065 18:34:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:35:48.065 18:34:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:35:48.065 18:34:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:35:48.065 18:34:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:35:48.065 18:34:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:35:48.065 18:34:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:35:48.065 18:34:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:35:48.065 18:34:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:35:48.065 18:34:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:35:48.065 18:34:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:35:48.065 18:34:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:35:48.065 18:34:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:35:48.065 18:34:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=84837 00:35:48.065 18:34:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw 
-M 50 -o 3M -q 2 -U -z -L bdev_raid 00:35:48.065 18:34:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 84837 00:35:48.065 18:34:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@835 -- # '[' -z 84837 ']' 00:35:48.065 18:34:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:48.065 18:34:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:48.065 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:48.065 18:34:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:48.065 18:34:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:48.065 18:34:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:48.325 I/O size of 3145728 is greater than zero copy threshold (65536). 00:35:48.325 Zero copy mechanism will not be used. 00:35:48.325 [2024-12-06 18:34:19.041562] Starting SPDK v25.01-pre git sha1 b6a18b192 / DPDK 24.03.0 initialization... 
00:35:48.325 [2024-12-06 18:34:19.041711] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84837 ] 00:35:48.325 [2024-12-06 18:34:19.224436] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:48.584 [2024-12-06 18:34:19.369357] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:48.860 [2024-12-06 18:34:19.616664] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:35:48.860 [2024-12-06 18:34:19.616753] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:35:49.119 18:34:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:49.119 18:34:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@868 -- # return 0 00:35:49.119 18:34:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:35:49.119 18:34:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:35:49.119 18:34:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:49.119 18:34:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:49.119 BaseBdev1_malloc 00:35:49.119 18:34:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:49.119 18:34:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:35:49.119 18:34:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:49.119 18:34:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:49.119 [2024-12-06 18:34:19.948640] vbdev_passthru.c: 
608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:35:49.119 [2024-12-06 18:34:19.948841] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:35:49.119 [2024-12-06 18:34:19.948879] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:35:49.119 [2024-12-06 18:34:19.948896] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:35:49.119 [2024-12-06 18:34:19.951836] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:35:49.119 [2024-12-06 18:34:19.951888] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:35:49.119 BaseBdev1 00:35:49.119 18:34:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:49.119 18:34:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:35:49.119 18:34:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:35:49.119 18:34:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:49.119 18:34:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:49.119 BaseBdev2_malloc 00:35:49.119 18:34:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:49.119 18:34:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:35:49.119 18:34:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:49.119 18:34:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:49.119 [2024-12-06 18:34:20.010648] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:35:49.119 [2024-12-06 18:34:20.010745] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev 
opened 00:35:49.119 [2024-12-06 18:34:20.010790] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:35:49.119 [2024-12-06 18:34:20.010812] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:35:49.119 BaseBdev2 00:35:49.119 [2024-12-06 18:34:20.015218] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:35:49.119 [2024-12-06 18:34:20.015265] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:35:49.119 18:34:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:49.119 18:34:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:35:49.119 18:34:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:35:49.119 18:34:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:49.119 18:34:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:49.378 BaseBdev3_malloc 00:35:49.378 18:34:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:49.378 18:34:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:35:49.378 18:34:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:49.378 18:34:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:49.378 [2024-12-06 18:34:20.088039] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:35:49.378 [2024-12-06 18:34:20.088234] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:35:49.378 [2024-12-06 18:34:20.088296] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:35:49.378 [2024-12-06 
18:34:20.088385] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:35:49.378 [2024-12-06 18:34:20.091122] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:35:49.378 [2024-12-06 18:34:20.091276] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:35:49.378 BaseBdev3 00:35:49.378 18:34:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:49.378 18:34:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:35:49.378 18:34:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:35:49.378 18:34:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:49.378 18:34:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:49.378 BaseBdev4_malloc 00:35:49.378 18:34:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:49.378 18:34:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:35:49.378 18:34:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:49.378 18:34:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:49.378 [2024-12-06 18:34:20.157782] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:35:49.378 [2024-12-06 18:34:20.157967] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:35:49.378 [2024-12-06 18:34:20.158025] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:35:49.378 [2024-12-06 18:34:20.158141] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:35:49.378 [2024-12-06 18:34:20.160893] vbdev_passthru.c: 710:vbdev_passthru_register: 
*NOTICE*: pt_bdev registered 00:35:49.378 [2024-12-06 18:34:20.161038] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:35:49.378 BaseBdev4 00:35:49.378 18:34:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:49.378 18:34:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:35:49.378 18:34:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:49.379 18:34:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:49.379 spare_malloc 00:35:49.379 18:34:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:49.379 18:34:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:35:49.379 18:34:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:49.379 18:34:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:49.379 spare_delay 00:35:49.379 18:34:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:49.379 18:34:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:35:49.379 18:34:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:49.379 18:34:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:49.379 [2024-12-06 18:34:20.233294] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:35:49.379 [2024-12-06 18:34:20.233468] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:35:49.379 [2024-12-06 18:34:20.233523] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 
00:35:49.379 [2024-12-06 18:34:20.233600] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:35:49.379 [2024-12-06 18:34:20.236417] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:35:49.379 [2024-12-06 18:34:20.236560] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:35:49.379 spare 00:35:49.379 18:34:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:49.379 18:34:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:35:49.379 18:34:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:49.379 18:34:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:49.379 [2024-12-06 18:34:20.245391] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:35:49.379 [2024-12-06 18:34:20.247925] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:35:49.379 [2024-12-06 18:34:20.248098] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:35:49.379 [2024-12-06 18:34:20.248209] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:35:49.379 [2024-12-06 18:34:20.248508] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:35:49.379 [2024-12-06 18:34:20.248616] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:35:49.379 [2024-12-06 18:34:20.248933] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:35:49.379 [2024-12-06 18:34:20.257251] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:35:49.379 [2024-12-06 18:34:20.257365] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is 
created with name raid_bdev1, raid_bdev 0x617000007780 00:35:49.379 [2024-12-06 18:34:20.257689] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:35:49.379 18:34:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:49.379 18:34:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:35:49.379 18:34:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:35:49.379 18:34:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:35:49.379 18:34:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:35:49.379 18:34:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:35:49.379 18:34:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:35:49.379 18:34:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:35:49.379 18:34:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:35:49.379 18:34:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:35:49.379 18:34:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:35:49.379 18:34:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:49.379 18:34:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:49.379 18:34:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:49.379 18:34:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:49.379 18:34:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:49.379 18:34:20 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:35:49.379 "name": "raid_bdev1", 00:35:49.379 "uuid": "56f0aa97-aacf-40bc-a0b2-aed30bb07127", 00:35:49.379 "strip_size_kb": 64, 00:35:49.379 "state": "online", 00:35:49.379 "raid_level": "raid5f", 00:35:49.379 "superblock": true, 00:35:49.379 "num_base_bdevs": 4, 00:35:49.379 "num_base_bdevs_discovered": 4, 00:35:49.379 "num_base_bdevs_operational": 4, 00:35:49.379 "base_bdevs_list": [ 00:35:49.379 { 00:35:49.379 "name": "BaseBdev1", 00:35:49.379 "uuid": "44ef7702-757b-5a5a-8187-5b0069222a70", 00:35:49.379 "is_configured": true, 00:35:49.379 "data_offset": 2048, 00:35:49.379 "data_size": 63488 00:35:49.379 }, 00:35:49.379 { 00:35:49.379 "name": "BaseBdev2", 00:35:49.379 "uuid": "663eed2d-8c19-5bfd-8f4a-43d53837c6a9", 00:35:49.379 "is_configured": true, 00:35:49.379 "data_offset": 2048, 00:35:49.379 "data_size": 63488 00:35:49.379 }, 00:35:49.379 { 00:35:49.379 "name": "BaseBdev3", 00:35:49.379 "uuid": "dc06577f-958a-55da-9a21-7b73771841e7", 00:35:49.379 "is_configured": true, 00:35:49.379 "data_offset": 2048, 00:35:49.379 "data_size": 63488 00:35:49.379 }, 00:35:49.379 { 00:35:49.379 "name": "BaseBdev4", 00:35:49.379 "uuid": "efe03b1c-90dc-58df-84ba-78ff476527d1", 00:35:49.379 "is_configured": true, 00:35:49.379 "data_offset": 2048, 00:35:49.379 "data_size": 63488 00:35:49.379 } 00:35:49.379 ] 00:35:49.379 }' 00:35:49.379 18:34:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:35:49.379 18:34:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:49.948 18:34:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:35:49.948 18:34:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:49.948 18:34:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:35:49.948 18:34:20 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:49.948 [2024-12-06 18:34:20.675040] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:35:49.948 18:34:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:49.948 18:34:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=190464 00:35:49.948 18:34:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:49.948 18:34:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:35:49.948 18:34:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:49.948 18:34:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:49.948 18:34:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:49.948 18:34:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:35:49.948 18:34:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:35:49.948 18:34:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:35:49.948 18:34:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:35:49.948 18:34:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:35:49.948 18:34:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:35:49.948 18:34:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:35:49.948 18:34:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:35:49.948 18:34:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:35:49.948 18:34:20 
bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:35:49.948 18:34:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:35:49.948 18:34:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:35:49.948 18:34:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:35:49.948 18:34:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:35:50.207 [2024-12-06 18:34:20.958872] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:35:50.207 /dev/nbd0 00:35:50.207 18:34:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:35:50.207 18:34:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:35:50.207 18:34:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:35:50.207 18:34:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:35:50.207 18:34:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:35:50.207 18:34:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:35:50.207 18:34:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:35:50.207 18:34:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:35:50.207 18:34:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:35:50.207 18:34:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:35:50.207 18:34:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:35:50.207 1+0 records in 00:35:50.207 
1+0 records out 00:35:50.207 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000346314 s, 11.8 MB/s 00:35:50.207 18:34:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:35:50.207 18:34:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:35:50.207 18:34:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:35:50.207 18:34:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:35:50.207 18:34:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:35:50.207 18:34:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:35:50.207 18:34:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:35:50.207 18:34:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:35:50.207 18:34:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@630 -- # write_unit_size=384 00:35:50.207 18:34:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@631 -- # echo 192 00:35:50.207 18:34:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=196608 count=496 oflag=direct 00:35:50.774 496+0 records in 00:35:50.774 496+0 records out 00:35:50.774 97517568 bytes (98 MB, 93 MiB) copied, 0.521722 s, 187 MB/s 00:35:50.774 18:34:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:35:50.774 18:34:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:35:50.774 18:34:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:35:50.774 18:34:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:35:50.774 18:34:21 
bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:35:50.774 18:34:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:35:50.774 18:34:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:35:51.034 18:34:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:35:51.034 [2024-12-06 18:34:21.809201] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:35:51.034 18:34:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:35:51.034 18:34:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:35:51.034 18:34:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:35:51.034 18:34:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:35:51.034 18:34:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:35:51.034 18:34:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:35:51.034 18:34:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:35:51.034 18:34:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:35:51.034 18:34:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:51.034 18:34:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:51.034 [2024-12-06 18:34:21.835897] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:35:51.034 18:34:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:51.034 18:34:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:35:51.034 18:34:21 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:35:51.034 18:34:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:35:51.034 18:34:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:35:51.034 18:34:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:35:51.034 18:34:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:35:51.034 18:34:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:35:51.034 18:34:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:35:51.034 18:34:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:35:51.034 18:34:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:35:51.034 18:34:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:51.034 18:34:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:51.034 18:34:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:51.034 18:34:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:51.034 18:34:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:51.034 18:34:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:35:51.034 "name": "raid_bdev1", 00:35:51.034 "uuid": "56f0aa97-aacf-40bc-a0b2-aed30bb07127", 00:35:51.034 "strip_size_kb": 64, 00:35:51.034 "state": "online", 00:35:51.034 "raid_level": "raid5f", 00:35:51.034 "superblock": true, 00:35:51.034 "num_base_bdevs": 4, 00:35:51.034 "num_base_bdevs_discovered": 3, 00:35:51.034 "num_base_bdevs_operational": 3, 00:35:51.034 
"base_bdevs_list": [ 00:35:51.034 { 00:35:51.034 "name": null, 00:35:51.034 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:51.034 "is_configured": false, 00:35:51.034 "data_offset": 0, 00:35:51.034 "data_size": 63488 00:35:51.034 }, 00:35:51.034 { 00:35:51.034 "name": "BaseBdev2", 00:35:51.034 "uuid": "663eed2d-8c19-5bfd-8f4a-43d53837c6a9", 00:35:51.034 "is_configured": true, 00:35:51.034 "data_offset": 2048, 00:35:51.034 "data_size": 63488 00:35:51.034 }, 00:35:51.034 { 00:35:51.034 "name": "BaseBdev3", 00:35:51.034 "uuid": "dc06577f-958a-55da-9a21-7b73771841e7", 00:35:51.034 "is_configured": true, 00:35:51.034 "data_offset": 2048, 00:35:51.034 "data_size": 63488 00:35:51.034 }, 00:35:51.034 { 00:35:51.034 "name": "BaseBdev4", 00:35:51.034 "uuid": "efe03b1c-90dc-58df-84ba-78ff476527d1", 00:35:51.034 "is_configured": true, 00:35:51.034 "data_offset": 2048, 00:35:51.034 "data_size": 63488 00:35:51.034 } 00:35:51.034 ] 00:35:51.034 }' 00:35:51.034 18:34:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:35:51.034 18:34:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:51.603 18:34:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:35:51.603 18:34:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:51.603 18:34:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:51.603 [2024-12-06 18:34:22.287361] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:35:51.603 [2024-12-06 18:34:22.304653] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002aa50 00:35:51.603 18:34:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:51.603 18:34:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:35:51.603 [2024-12-06 18:34:22.315561] 
bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:35:52.540 18:34:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:35:52.541 18:34:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:35:52.541 18:34:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:35:52.541 18:34:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:35:52.541 18:34:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:35:52.541 18:34:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:52.541 18:34:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:52.541 18:34:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:52.541 18:34:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:52.541 18:34:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:52.541 18:34:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:35:52.541 "name": "raid_bdev1", 00:35:52.541 "uuid": "56f0aa97-aacf-40bc-a0b2-aed30bb07127", 00:35:52.541 "strip_size_kb": 64, 00:35:52.541 "state": "online", 00:35:52.541 "raid_level": "raid5f", 00:35:52.541 "superblock": true, 00:35:52.541 "num_base_bdevs": 4, 00:35:52.541 "num_base_bdevs_discovered": 4, 00:35:52.541 "num_base_bdevs_operational": 4, 00:35:52.541 "process": { 00:35:52.541 "type": "rebuild", 00:35:52.541 "target": "spare", 00:35:52.541 "progress": { 00:35:52.541 "blocks": 19200, 00:35:52.541 "percent": 10 00:35:52.541 } 00:35:52.541 }, 00:35:52.541 "base_bdevs_list": [ 00:35:52.541 { 00:35:52.541 "name": "spare", 00:35:52.541 "uuid": 
"0b4c43a4-35ac-525c-a364-51da484b7ab7", 00:35:52.541 "is_configured": true, 00:35:52.541 "data_offset": 2048, 00:35:52.541 "data_size": 63488 00:35:52.541 }, 00:35:52.541 { 00:35:52.541 "name": "BaseBdev2", 00:35:52.541 "uuid": "663eed2d-8c19-5bfd-8f4a-43d53837c6a9", 00:35:52.541 "is_configured": true, 00:35:52.541 "data_offset": 2048, 00:35:52.541 "data_size": 63488 00:35:52.541 }, 00:35:52.541 { 00:35:52.541 "name": "BaseBdev3", 00:35:52.541 "uuid": "dc06577f-958a-55da-9a21-7b73771841e7", 00:35:52.541 "is_configured": true, 00:35:52.541 "data_offset": 2048, 00:35:52.541 "data_size": 63488 00:35:52.541 }, 00:35:52.541 { 00:35:52.541 "name": "BaseBdev4", 00:35:52.541 "uuid": "efe03b1c-90dc-58df-84ba-78ff476527d1", 00:35:52.541 "is_configured": true, 00:35:52.541 "data_offset": 2048, 00:35:52.541 "data_size": 63488 00:35:52.541 } 00:35:52.541 ] 00:35:52.541 }' 00:35:52.541 18:34:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:35:52.541 18:34:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:35:52.541 18:34:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:35:52.541 18:34:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:35:52.541 18:34:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:35:52.541 18:34:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:52.541 18:34:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:52.541 [2024-12-06 18:34:23.455702] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:35:52.800 [2024-12-06 18:34:23.524732] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:35:52.800 [2024-12-06 18:34:23.524820] bdev_raid.c: 
345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:35:52.800 [2024-12-06 18:34:23.524841] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:35:52.800 [2024-12-06 18:34:23.524855] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:35:52.800 18:34:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:52.800 18:34:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:35:52.800 18:34:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:35:52.800 18:34:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:35:52.800 18:34:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:35:52.800 18:34:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:35:52.800 18:34:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:35:52.800 18:34:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:35:52.800 18:34:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:35:52.800 18:34:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:35:52.800 18:34:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:35:52.800 18:34:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:52.800 18:34:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:52.800 18:34:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:52.800 18:34:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 
00:35:52.800 18:34:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:52.800 18:34:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:35:52.800 "name": "raid_bdev1", 00:35:52.800 "uuid": "56f0aa97-aacf-40bc-a0b2-aed30bb07127", 00:35:52.800 "strip_size_kb": 64, 00:35:52.800 "state": "online", 00:35:52.800 "raid_level": "raid5f", 00:35:52.800 "superblock": true, 00:35:52.800 "num_base_bdevs": 4, 00:35:52.800 "num_base_bdevs_discovered": 3, 00:35:52.800 "num_base_bdevs_operational": 3, 00:35:52.800 "base_bdevs_list": [ 00:35:52.800 { 00:35:52.800 "name": null, 00:35:52.800 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:52.800 "is_configured": false, 00:35:52.800 "data_offset": 0, 00:35:52.800 "data_size": 63488 00:35:52.800 }, 00:35:52.800 { 00:35:52.800 "name": "BaseBdev2", 00:35:52.800 "uuid": "663eed2d-8c19-5bfd-8f4a-43d53837c6a9", 00:35:52.800 "is_configured": true, 00:35:52.800 "data_offset": 2048, 00:35:52.800 "data_size": 63488 00:35:52.800 }, 00:35:52.800 { 00:35:52.800 "name": "BaseBdev3", 00:35:52.800 "uuid": "dc06577f-958a-55da-9a21-7b73771841e7", 00:35:52.800 "is_configured": true, 00:35:52.800 "data_offset": 2048, 00:35:52.800 "data_size": 63488 00:35:52.800 }, 00:35:52.800 { 00:35:52.800 "name": "BaseBdev4", 00:35:52.800 "uuid": "efe03b1c-90dc-58df-84ba-78ff476527d1", 00:35:52.800 "is_configured": true, 00:35:52.800 "data_offset": 2048, 00:35:52.800 "data_size": 63488 00:35:52.800 } 00:35:52.800 ] 00:35:52.800 }' 00:35:52.800 18:34:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:35:52.800 18:34:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:53.060 18:34:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:35:53.060 18:34:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:35:53.060 
18:34:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:35:53.060 18:34:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:35:53.060 18:34:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:35:53.060 18:34:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:53.060 18:34:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:53.060 18:34:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:53.060 18:34:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:53.060 18:34:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:53.320 18:34:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:35:53.320 "name": "raid_bdev1", 00:35:53.320 "uuid": "56f0aa97-aacf-40bc-a0b2-aed30bb07127", 00:35:53.320 "strip_size_kb": 64, 00:35:53.320 "state": "online", 00:35:53.320 "raid_level": "raid5f", 00:35:53.320 "superblock": true, 00:35:53.320 "num_base_bdevs": 4, 00:35:53.320 "num_base_bdevs_discovered": 3, 00:35:53.320 "num_base_bdevs_operational": 3, 00:35:53.320 "base_bdevs_list": [ 00:35:53.320 { 00:35:53.320 "name": null, 00:35:53.320 "uuid": "00000000-0000-0000-0000-000000000000", 00:35:53.320 "is_configured": false, 00:35:53.320 "data_offset": 0, 00:35:53.320 "data_size": 63488 00:35:53.320 }, 00:35:53.320 { 00:35:53.320 "name": "BaseBdev2", 00:35:53.320 "uuid": "663eed2d-8c19-5bfd-8f4a-43d53837c6a9", 00:35:53.320 "is_configured": true, 00:35:53.320 "data_offset": 2048, 00:35:53.320 "data_size": 63488 00:35:53.320 }, 00:35:53.320 { 00:35:53.320 "name": "BaseBdev3", 00:35:53.320 "uuid": "dc06577f-958a-55da-9a21-7b73771841e7", 00:35:53.320 "is_configured": true, 00:35:53.320 "data_offset": 2048, 00:35:53.320 
"data_size": 63488 00:35:53.320 }, 00:35:53.320 { 00:35:53.320 "name": "BaseBdev4", 00:35:53.320 "uuid": "efe03b1c-90dc-58df-84ba-78ff476527d1", 00:35:53.320 "is_configured": true, 00:35:53.320 "data_offset": 2048, 00:35:53.320 "data_size": 63488 00:35:53.320 } 00:35:53.320 ] 00:35:53.320 }' 00:35:53.320 18:34:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:35:53.320 18:34:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:35:53.320 18:34:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:35:53.320 18:34:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:35:53.320 18:34:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:35:53.320 18:34:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:53.320 18:34:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:53.320 [2024-12-06 18:34:24.127531] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:35:53.320 [2024-12-06 18:34:24.143741] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002ab20 00:35:53.320 18:34:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:53.320 18:34:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:35:53.320 [2024-12-06 18:34:24.154052] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:35:54.259 18:34:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:35:54.259 18:34:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:35:54.259 18:34:25 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:35:54.259 18:34:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:35:54.259 18:34:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:35:54.259 18:34:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:54.259 18:34:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:54.259 18:34:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:54.259 18:34:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:54.259 18:34:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:54.259 18:34:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:35:54.259 "name": "raid_bdev1", 00:35:54.259 "uuid": "56f0aa97-aacf-40bc-a0b2-aed30bb07127", 00:35:54.259 "strip_size_kb": 64, 00:35:54.259 "state": "online", 00:35:54.259 "raid_level": "raid5f", 00:35:54.259 "superblock": true, 00:35:54.259 "num_base_bdevs": 4, 00:35:54.259 "num_base_bdevs_discovered": 4, 00:35:54.259 "num_base_bdevs_operational": 4, 00:35:54.259 "process": { 00:35:54.259 "type": "rebuild", 00:35:54.259 "target": "spare", 00:35:54.259 "progress": { 00:35:54.259 "blocks": 19200, 00:35:54.259 "percent": 10 00:35:54.259 } 00:35:54.259 }, 00:35:54.259 "base_bdevs_list": [ 00:35:54.259 { 00:35:54.259 "name": "spare", 00:35:54.259 "uuid": "0b4c43a4-35ac-525c-a364-51da484b7ab7", 00:35:54.259 "is_configured": true, 00:35:54.259 "data_offset": 2048, 00:35:54.259 "data_size": 63488 00:35:54.259 }, 00:35:54.259 { 00:35:54.259 "name": "BaseBdev2", 00:35:54.259 "uuid": "663eed2d-8c19-5bfd-8f4a-43d53837c6a9", 00:35:54.259 "is_configured": true, 00:35:54.259 "data_offset": 2048, 00:35:54.259 "data_size": 63488 00:35:54.259 }, 00:35:54.259 { 
00:35:54.259 "name": "BaseBdev3", 00:35:54.259 "uuid": "dc06577f-958a-55da-9a21-7b73771841e7", 00:35:54.259 "is_configured": true, 00:35:54.259 "data_offset": 2048, 00:35:54.259 "data_size": 63488 00:35:54.259 }, 00:35:54.259 { 00:35:54.259 "name": "BaseBdev4", 00:35:54.259 "uuid": "efe03b1c-90dc-58df-84ba-78ff476527d1", 00:35:54.259 "is_configured": true, 00:35:54.259 "data_offset": 2048, 00:35:54.259 "data_size": 63488 00:35:54.259 } 00:35:54.259 ] 00:35:54.260 }' 00:35:54.260 18:34:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:35:54.518 18:34:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:35:54.518 18:34:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:35:54.518 18:34:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:35:54.518 18:34:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:35:54.518 18:34:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:35:54.518 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:35:54.518 18:34:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:35:54.518 18:34:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:35:54.518 18:34:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=642 00:35:54.518 18:34:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:35:54.518 18:34:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:35:54.518 18:34:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:35:54.518 18:34:25 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:35:54.518 18:34:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:35:54.518 18:34:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:35:54.518 18:34:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:54.518 18:34:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:54.518 18:34:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:54.518 18:34:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:54.518 18:34:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:54.518 18:34:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:35:54.518 "name": "raid_bdev1", 00:35:54.518 "uuid": "56f0aa97-aacf-40bc-a0b2-aed30bb07127", 00:35:54.518 "strip_size_kb": 64, 00:35:54.518 "state": "online", 00:35:54.518 "raid_level": "raid5f", 00:35:54.518 "superblock": true, 00:35:54.518 "num_base_bdevs": 4, 00:35:54.518 "num_base_bdevs_discovered": 4, 00:35:54.518 "num_base_bdevs_operational": 4, 00:35:54.518 "process": { 00:35:54.518 "type": "rebuild", 00:35:54.518 "target": "spare", 00:35:54.518 "progress": { 00:35:54.518 "blocks": 21120, 00:35:54.518 "percent": 11 00:35:54.518 } 00:35:54.518 }, 00:35:54.518 "base_bdevs_list": [ 00:35:54.518 { 00:35:54.518 "name": "spare", 00:35:54.518 "uuid": "0b4c43a4-35ac-525c-a364-51da484b7ab7", 00:35:54.518 "is_configured": true, 00:35:54.518 "data_offset": 2048, 00:35:54.518 "data_size": 63488 00:35:54.518 }, 00:35:54.518 { 00:35:54.518 "name": "BaseBdev2", 00:35:54.518 "uuid": "663eed2d-8c19-5bfd-8f4a-43d53837c6a9", 00:35:54.518 "is_configured": true, 00:35:54.518 "data_offset": 2048, 00:35:54.518 "data_size": 63488 00:35:54.518 }, 00:35:54.518 { 
00:35:54.518 "name": "BaseBdev3", 00:35:54.518 "uuid": "dc06577f-958a-55da-9a21-7b73771841e7", 00:35:54.518 "is_configured": true, 00:35:54.518 "data_offset": 2048, 00:35:54.518 "data_size": 63488 00:35:54.518 }, 00:35:54.518 { 00:35:54.518 "name": "BaseBdev4", 00:35:54.518 "uuid": "efe03b1c-90dc-58df-84ba-78ff476527d1", 00:35:54.518 "is_configured": true, 00:35:54.518 "data_offset": 2048, 00:35:54.518 "data_size": 63488 00:35:54.518 } 00:35:54.518 ] 00:35:54.518 }' 00:35:54.518 18:34:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:35:54.518 18:34:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:35:54.518 18:34:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:35:54.518 18:34:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:35:54.518 18:34:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:35:55.894 18:34:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:35:55.894 18:34:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:35:55.894 18:34:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:35:55.894 18:34:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:35:55.894 18:34:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:35:55.894 18:34:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:35:55.894 18:34:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:55.894 18:34:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:55.894 18:34:26 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:35:55.894 18:34:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:55.894 18:34:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:55.894 18:34:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:35:55.894 "name": "raid_bdev1", 00:35:55.894 "uuid": "56f0aa97-aacf-40bc-a0b2-aed30bb07127", 00:35:55.894 "strip_size_kb": 64, 00:35:55.894 "state": "online", 00:35:55.894 "raid_level": "raid5f", 00:35:55.894 "superblock": true, 00:35:55.894 "num_base_bdevs": 4, 00:35:55.894 "num_base_bdevs_discovered": 4, 00:35:55.894 "num_base_bdevs_operational": 4, 00:35:55.894 "process": { 00:35:55.894 "type": "rebuild", 00:35:55.894 "target": "spare", 00:35:55.894 "progress": { 00:35:55.894 "blocks": 42240, 00:35:55.894 "percent": 22 00:35:55.894 } 00:35:55.894 }, 00:35:55.894 "base_bdevs_list": [ 00:35:55.894 { 00:35:55.894 "name": "spare", 00:35:55.894 "uuid": "0b4c43a4-35ac-525c-a364-51da484b7ab7", 00:35:55.894 "is_configured": true, 00:35:55.894 "data_offset": 2048, 00:35:55.894 "data_size": 63488 00:35:55.894 }, 00:35:55.894 { 00:35:55.894 "name": "BaseBdev2", 00:35:55.894 "uuid": "663eed2d-8c19-5bfd-8f4a-43d53837c6a9", 00:35:55.894 "is_configured": true, 00:35:55.894 "data_offset": 2048, 00:35:55.894 "data_size": 63488 00:35:55.894 }, 00:35:55.894 { 00:35:55.894 "name": "BaseBdev3", 00:35:55.894 "uuid": "dc06577f-958a-55da-9a21-7b73771841e7", 00:35:55.894 "is_configured": true, 00:35:55.894 "data_offset": 2048, 00:35:55.894 "data_size": 63488 00:35:55.894 }, 00:35:55.894 { 00:35:55.894 "name": "BaseBdev4", 00:35:55.894 "uuid": "efe03b1c-90dc-58df-84ba-78ff476527d1", 00:35:55.895 "is_configured": true, 00:35:55.895 "data_offset": 2048, 00:35:55.895 "data_size": 63488 00:35:55.895 } 00:35:55.895 ] 00:35:55.895 }' 00:35:55.895 18:34:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # 
jq -r '.process.type // "none"' 00:35:55.895 18:34:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:35:55.895 18:34:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:35:55.895 18:34:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:35:55.895 18:34:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:35:56.832 18:34:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:35:56.832 18:34:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:35:56.833 18:34:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:35:56.833 18:34:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:35:56.833 18:34:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:35:56.833 18:34:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:35:56.833 18:34:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:56.833 18:34:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:56.833 18:34:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:56.833 18:34:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:56.833 18:34:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:56.833 18:34:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:35:56.833 "name": "raid_bdev1", 00:35:56.833 "uuid": "56f0aa97-aacf-40bc-a0b2-aed30bb07127", 00:35:56.833 "strip_size_kb": 64, 00:35:56.833 "state": "online", 00:35:56.833 
"raid_level": "raid5f", 00:35:56.833 "superblock": true, 00:35:56.833 "num_base_bdevs": 4, 00:35:56.833 "num_base_bdevs_discovered": 4, 00:35:56.833 "num_base_bdevs_operational": 4, 00:35:56.833 "process": { 00:35:56.833 "type": "rebuild", 00:35:56.833 "target": "spare", 00:35:56.833 "progress": { 00:35:56.833 "blocks": 63360, 00:35:56.833 "percent": 33 00:35:56.833 } 00:35:56.833 }, 00:35:56.833 "base_bdevs_list": [ 00:35:56.833 { 00:35:56.833 "name": "spare", 00:35:56.833 "uuid": "0b4c43a4-35ac-525c-a364-51da484b7ab7", 00:35:56.833 "is_configured": true, 00:35:56.833 "data_offset": 2048, 00:35:56.833 "data_size": 63488 00:35:56.833 }, 00:35:56.833 { 00:35:56.833 "name": "BaseBdev2", 00:35:56.833 "uuid": "663eed2d-8c19-5bfd-8f4a-43d53837c6a9", 00:35:56.833 "is_configured": true, 00:35:56.833 "data_offset": 2048, 00:35:56.833 "data_size": 63488 00:35:56.833 }, 00:35:56.833 { 00:35:56.833 "name": "BaseBdev3", 00:35:56.833 "uuid": "dc06577f-958a-55da-9a21-7b73771841e7", 00:35:56.833 "is_configured": true, 00:35:56.833 "data_offset": 2048, 00:35:56.833 "data_size": 63488 00:35:56.833 }, 00:35:56.833 { 00:35:56.833 "name": "BaseBdev4", 00:35:56.833 "uuid": "efe03b1c-90dc-58df-84ba-78ff476527d1", 00:35:56.833 "is_configured": true, 00:35:56.833 "data_offset": 2048, 00:35:56.833 "data_size": 63488 00:35:56.833 } 00:35:56.833 ] 00:35:56.833 }' 00:35:56.833 18:34:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:35:56.833 18:34:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:35:56.833 18:34:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:35:56.833 18:34:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:35:56.833 18:34:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:35:57.768 18:34:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- 
# (( SECONDS < timeout )) 00:35:57.769 18:34:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:35:57.769 18:34:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:35:57.769 18:34:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:35:57.769 18:34:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:35:57.769 18:34:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:35:57.769 18:34:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:57.769 18:34:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:57.769 18:34:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:57.769 18:34:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:58.028 18:34:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:58.028 18:34:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:35:58.028 "name": "raid_bdev1", 00:35:58.028 "uuid": "56f0aa97-aacf-40bc-a0b2-aed30bb07127", 00:35:58.028 "strip_size_kb": 64, 00:35:58.028 "state": "online", 00:35:58.028 "raid_level": "raid5f", 00:35:58.028 "superblock": true, 00:35:58.028 "num_base_bdevs": 4, 00:35:58.028 "num_base_bdevs_discovered": 4, 00:35:58.028 "num_base_bdevs_operational": 4, 00:35:58.028 "process": { 00:35:58.028 "type": "rebuild", 00:35:58.028 "target": "spare", 00:35:58.028 "progress": { 00:35:58.028 "blocks": 86400, 00:35:58.028 "percent": 45 00:35:58.028 } 00:35:58.028 }, 00:35:58.028 "base_bdevs_list": [ 00:35:58.028 { 00:35:58.028 "name": "spare", 00:35:58.028 "uuid": "0b4c43a4-35ac-525c-a364-51da484b7ab7", 00:35:58.028 "is_configured": true, 
00:35:58.028 "data_offset": 2048, 00:35:58.028 "data_size": 63488 00:35:58.028 }, 00:35:58.028 { 00:35:58.028 "name": "BaseBdev2", 00:35:58.028 "uuid": "663eed2d-8c19-5bfd-8f4a-43d53837c6a9", 00:35:58.028 "is_configured": true, 00:35:58.028 "data_offset": 2048, 00:35:58.028 "data_size": 63488 00:35:58.028 }, 00:35:58.028 { 00:35:58.028 "name": "BaseBdev3", 00:35:58.028 "uuid": "dc06577f-958a-55da-9a21-7b73771841e7", 00:35:58.028 "is_configured": true, 00:35:58.028 "data_offset": 2048, 00:35:58.028 "data_size": 63488 00:35:58.028 }, 00:35:58.028 { 00:35:58.028 "name": "BaseBdev4", 00:35:58.028 "uuid": "efe03b1c-90dc-58df-84ba-78ff476527d1", 00:35:58.028 "is_configured": true, 00:35:58.028 "data_offset": 2048, 00:35:58.028 "data_size": 63488 00:35:58.028 } 00:35:58.028 ] 00:35:58.028 }' 00:35:58.028 18:34:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:35:58.028 18:34:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:35:58.028 18:34:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:35:58.028 18:34:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:35:58.028 18:34:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:35:58.965 18:34:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:35:58.965 18:34:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:35:58.965 18:34:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:35:58.965 18:34:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:35:58.965 18:34:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:35:58.965 18:34:29 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:35:58.965 18:34:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:35:58.965 18:34:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:35:58.965 18:34:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:58.965 18:34:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:35:58.965 18:34:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:58.965 18:34:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:35:58.965 "name": "raid_bdev1", 00:35:58.965 "uuid": "56f0aa97-aacf-40bc-a0b2-aed30bb07127", 00:35:58.965 "strip_size_kb": 64, 00:35:58.965 "state": "online", 00:35:58.965 "raid_level": "raid5f", 00:35:58.965 "superblock": true, 00:35:58.965 "num_base_bdevs": 4, 00:35:58.965 "num_base_bdevs_discovered": 4, 00:35:58.966 "num_base_bdevs_operational": 4, 00:35:58.966 "process": { 00:35:58.966 "type": "rebuild", 00:35:58.966 "target": "spare", 00:35:58.966 "progress": { 00:35:58.966 "blocks": 107520, 00:35:58.966 "percent": 56 00:35:58.966 } 00:35:58.966 }, 00:35:58.966 "base_bdevs_list": [ 00:35:58.966 { 00:35:58.966 "name": "spare", 00:35:58.966 "uuid": "0b4c43a4-35ac-525c-a364-51da484b7ab7", 00:35:58.966 "is_configured": true, 00:35:58.966 "data_offset": 2048, 00:35:58.966 "data_size": 63488 00:35:58.966 }, 00:35:58.966 { 00:35:58.966 "name": "BaseBdev2", 00:35:58.966 "uuid": "663eed2d-8c19-5bfd-8f4a-43d53837c6a9", 00:35:58.966 "is_configured": true, 00:35:58.966 "data_offset": 2048, 00:35:58.966 "data_size": 63488 00:35:58.966 }, 00:35:58.966 { 00:35:58.966 "name": "BaseBdev3", 00:35:58.966 "uuid": "dc06577f-958a-55da-9a21-7b73771841e7", 00:35:58.966 "is_configured": true, 00:35:58.966 "data_offset": 2048, 00:35:58.966 "data_size": 63488 00:35:58.966 }, 00:35:58.966 
{ 00:35:58.966 "name": "BaseBdev4", 00:35:58.966 "uuid": "efe03b1c-90dc-58df-84ba-78ff476527d1", 00:35:58.966 "is_configured": true, 00:35:58.966 "data_offset": 2048, 00:35:58.966 "data_size": 63488 00:35:58.966 } 00:35:58.966 ] 00:35:58.966 }' 00:35:58.966 18:34:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:35:59.225 18:34:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:35:59.225 18:34:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:35:59.225 18:34:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:35:59.225 18:34:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:36:00.163 18:34:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:36:00.163 18:34:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:36:00.163 18:34:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:36:00.163 18:34:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:36:00.163 18:34:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:36:00.163 18:34:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:36:00.163 18:34:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:00.163 18:34:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:00.163 18:34:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:00.163 18:34:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:00.163 18:34:31 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:00.163 18:34:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:36:00.163 "name": "raid_bdev1", 00:36:00.163 "uuid": "56f0aa97-aacf-40bc-a0b2-aed30bb07127", 00:36:00.163 "strip_size_kb": 64, 00:36:00.163 "state": "online", 00:36:00.163 "raid_level": "raid5f", 00:36:00.163 "superblock": true, 00:36:00.163 "num_base_bdevs": 4, 00:36:00.163 "num_base_bdevs_discovered": 4, 00:36:00.163 "num_base_bdevs_operational": 4, 00:36:00.163 "process": { 00:36:00.163 "type": "rebuild", 00:36:00.163 "target": "spare", 00:36:00.163 "progress": { 00:36:00.163 "blocks": 130560, 00:36:00.163 "percent": 68 00:36:00.163 } 00:36:00.163 }, 00:36:00.163 "base_bdevs_list": [ 00:36:00.163 { 00:36:00.163 "name": "spare", 00:36:00.163 "uuid": "0b4c43a4-35ac-525c-a364-51da484b7ab7", 00:36:00.163 "is_configured": true, 00:36:00.163 "data_offset": 2048, 00:36:00.163 "data_size": 63488 00:36:00.163 }, 00:36:00.163 { 00:36:00.163 "name": "BaseBdev2", 00:36:00.163 "uuid": "663eed2d-8c19-5bfd-8f4a-43d53837c6a9", 00:36:00.163 "is_configured": true, 00:36:00.163 "data_offset": 2048, 00:36:00.163 "data_size": 63488 00:36:00.163 }, 00:36:00.163 { 00:36:00.163 "name": "BaseBdev3", 00:36:00.163 "uuid": "dc06577f-958a-55da-9a21-7b73771841e7", 00:36:00.163 "is_configured": true, 00:36:00.163 "data_offset": 2048, 00:36:00.163 "data_size": 63488 00:36:00.163 }, 00:36:00.163 { 00:36:00.163 "name": "BaseBdev4", 00:36:00.163 "uuid": "efe03b1c-90dc-58df-84ba-78ff476527d1", 00:36:00.163 "is_configured": true, 00:36:00.163 "data_offset": 2048, 00:36:00.163 "data_size": 63488 00:36:00.163 } 00:36:00.163 ] 00:36:00.163 }' 00:36:00.163 18:34:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:36:00.163 18:34:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:36:00.163 18:34:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 
-- # jq -r '.process.target // "none"' 00:36:00.423 18:34:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:36:00.423 18:34:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:36:01.409 18:34:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:36:01.409 18:34:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:36:01.409 18:34:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:36:01.409 18:34:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:36:01.409 18:34:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:36:01.409 18:34:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:36:01.409 18:34:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:01.409 18:34:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:01.409 18:34:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:01.409 18:34:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:01.409 18:34:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:01.409 18:34:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:36:01.409 "name": "raid_bdev1", 00:36:01.409 "uuid": "56f0aa97-aacf-40bc-a0b2-aed30bb07127", 00:36:01.409 "strip_size_kb": 64, 00:36:01.409 "state": "online", 00:36:01.409 "raid_level": "raid5f", 00:36:01.409 "superblock": true, 00:36:01.409 "num_base_bdevs": 4, 00:36:01.409 "num_base_bdevs_discovered": 4, 00:36:01.409 "num_base_bdevs_operational": 4, 00:36:01.409 "process": { 00:36:01.409 "type": 
"rebuild", 00:36:01.409 "target": "spare", 00:36:01.409 "progress": { 00:36:01.409 "blocks": 151680, 00:36:01.409 "percent": 79 00:36:01.409 } 00:36:01.409 }, 00:36:01.409 "base_bdevs_list": [ 00:36:01.409 { 00:36:01.409 "name": "spare", 00:36:01.409 "uuid": "0b4c43a4-35ac-525c-a364-51da484b7ab7", 00:36:01.409 "is_configured": true, 00:36:01.409 "data_offset": 2048, 00:36:01.409 "data_size": 63488 00:36:01.409 }, 00:36:01.409 { 00:36:01.409 "name": "BaseBdev2", 00:36:01.409 "uuid": "663eed2d-8c19-5bfd-8f4a-43d53837c6a9", 00:36:01.409 "is_configured": true, 00:36:01.409 "data_offset": 2048, 00:36:01.409 "data_size": 63488 00:36:01.409 }, 00:36:01.409 { 00:36:01.409 "name": "BaseBdev3", 00:36:01.409 "uuid": "dc06577f-958a-55da-9a21-7b73771841e7", 00:36:01.409 "is_configured": true, 00:36:01.409 "data_offset": 2048, 00:36:01.409 "data_size": 63488 00:36:01.409 }, 00:36:01.409 { 00:36:01.409 "name": "BaseBdev4", 00:36:01.409 "uuid": "efe03b1c-90dc-58df-84ba-78ff476527d1", 00:36:01.409 "is_configured": true, 00:36:01.409 "data_offset": 2048, 00:36:01.409 "data_size": 63488 00:36:01.409 } 00:36:01.409 ] 00:36:01.409 }' 00:36:01.409 18:34:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:36:01.409 18:34:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:36:01.409 18:34:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:36:01.409 18:34:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:36:01.409 18:34:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:36:02.382 18:34:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:36:02.382 18:34:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:36:02.382 18:34:33 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:36:02.382 18:34:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:36:02.382 18:34:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:36:02.382 18:34:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:36:02.382 18:34:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:02.382 18:34:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:02.382 18:34:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:02.382 18:34:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:02.382 18:34:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:02.382 18:34:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:36:02.382 "name": "raid_bdev1", 00:36:02.382 "uuid": "56f0aa97-aacf-40bc-a0b2-aed30bb07127", 00:36:02.382 "strip_size_kb": 64, 00:36:02.382 "state": "online", 00:36:02.382 "raid_level": "raid5f", 00:36:02.382 "superblock": true, 00:36:02.382 "num_base_bdevs": 4, 00:36:02.382 "num_base_bdevs_discovered": 4, 00:36:02.382 "num_base_bdevs_operational": 4, 00:36:02.382 "process": { 00:36:02.382 "type": "rebuild", 00:36:02.382 "target": "spare", 00:36:02.382 "progress": { 00:36:02.382 "blocks": 172800, 00:36:02.382 "percent": 90 00:36:02.382 } 00:36:02.382 }, 00:36:02.382 "base_bdevs_list": [ 00:36:02.382 { 00:36:02.382 "name": "spare", 00:36:02.382 "uuid": "0b4c43a4-35ac-525c-a364-51da484b7ab7", 00:36:02.382 "is_configured": true, 00:36:02.382 "data_offset": 2048, 00:36:02.382 "data_size": 63488 00:36:02.382 }, 00:36:02.382 { 00:36:02.382 "name": "BaseBdev2", 00:36:02.382 "uuid": "663eed2d-8c19-5bfd-8f4a-43d53837c6a9", 00:36:02.382 
"is_configured": true, 00:36:02.382 "data_offset": 2048, 00:36:02.382 "data_size": 63488 00:36:02.382 }, 00:36:02.382 { 00:36:02.382 "name": "BaseBdev3", 00:36:02.382 "uuid": "dc06577f-958a-55da-9a21-7b73771841e7", 00:36:02.382 "is_configured": true, 00:36:02.382 "data_offset": 2048, 00:36:02.382 "data_size": 63488 00:36:02.382 }, 00:36:02.382 { 00:36:02.382 "name": "BaseBdev4", 00:36:02.382 "uuid": "efe03b1c-90dc-58df-84ba-78ff476527d1", 00:36:02.382 "is_configured": true, 00:36:02.382 "data_offset": 2048, 00:36:02.382 "data_size": 63488 00:36:02.382 } 00:36:02.382 ] 00:36:02.382 }' 00:36:02.382 18:34:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:36:02.641 18:34:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:36:02.642 18:34:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:36:02.642 18:34:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:36:02.642 18:34:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:36:03.578 [2024-12-06 18:34:34.214182] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:36:03.578 [2024-12-06 18:34:34.214261] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:36:03.578 [2024-12-06 18:34:34.214425] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:36:03.578 18:34:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:36:03.578 18:34:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:36:03.578 18:34:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:36:03.578 18:34:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local 
process_type=rebuild 00:36:03.578 18:34:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:36:03.578 18:34:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:36:03.578 18:34:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:03.578 18:34:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:03.578 18:34:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:03.578 18:34:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:03.578 18:34:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:03.578 18:34:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:36:03.578 "name": "raid_bdev1", 00:36:03.578 "uuid": "56f0aa97-aacf-40bc-a0b2-aed30bb07127", 00:36:03.578 "strip_size_kb": 64, 00:36:03.578 "state": "online", 00:36:03.578 "raid_level": "raid5f", 00:36:03.578 "superblock": true, 00:36:03.578 "num_base_bdevs": 4, 00:36:03.578 "num_base_bdevs_discovered": 4, 00:36:03.578 "num_base_bdevs_operational": 4, 00:36:03.578 "base_bdevs_list": [ 00:36:03.578 { 00:36:03.578 "name": "spare", 00:36:03.578 "uuid": "0b4c43a4-35ac-525c-a364-51da484b7ab7", 00:36:03.578 "is_configured": true, 00:36:03.578 "data_offset": 2048, 00:36:03.578 "data_size": 63488 00:36:03.578 }, 00:36:03.578 { 00:36:03.578 "name": "BaseBdev2", 00:36:03.578 "uuid": "663eed2d-8c19-5bfd-8f4a-43d53837c6a9", 00:36:03.578 "is_configured": true, 00:36:03.578 "data_offset": 2048, 00:36:03.578 "data_size": 63488 00:36:03.578 }, 00:36:03.578 { 00:36:03.578 "name": "BaseBdev3", 00:36:03.578 "uuid": "dc06577f-958a-55da-9a21-7b73771841e7", 00:36:03.578 "is_configured": true, 00:36:03.578 "data_offset": 2048, 00:36:03.578 "data_size": 63488 00:36:03.578 }, 00:36:03.578 { 00:36:03.578 "name": 
"BaseBdev4", 00:36:03.578 "uuid": "efe03b1c-90dc-58df-84ba-78ff476527d1", 00:36:03.578 "is_configured": true, 00:36:03.578 "data_offset": 2048, 00:36:03.578 "data_size": 63488 00:36:03.578 } 00:36:03.578 ] 00:36:03.578 }' 00:36:03.578 18:34:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:36:03.578 18:34:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:36:03.578 18:34:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:36:03.837 18:34:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:36:03.837 18:34:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:36:03.837 18:34:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:36:03.837 18:34:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:36:03.837 18:34:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:36:03.837 18:34:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:36:03.837 18:34:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:36:03.837 18:34:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:03.837 18:34:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:03.837 18:34:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:03.837 18:34:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:03.837 18:34:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:03.837 18:34:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # 
raid_bdev_info='{ 00:36:03.837 "name": "raid_bdev1", 00:36:03.837 "uuid": "56f0aa97-aacf-40bc-a0b2-aed30bb07127", 00:36:03.837 "strip_size_kb": 64, 00:36:03.837 "state": "online", 00:36:03.837 "raid_level": "raid5f", 00:36:03.837 "superblock": true, 00:36:03.837 "num_base_bdevs": 4, 00:36:03.837 "num_base_bdevs_discovered": 4, 00:36:03.837 "num_base_bdevs_operational": 4, 00:36:03.837 "base_bdevs_list": [ 00:36:03.837 { 00:36:03.837 "name": "spare", 00:36:03.837 "uuid": "0b4c43a4-35ac-525c-a364-51da484b7ab7", 00:36:03.837 "is_configured": true, 00:36:03.837 "data_offset": 2048, 00:36:03.837 "data_size": 63488 00:36:03.837 }, 00:36:03.837 { 00:36:03.837 "name": "BaseBdev2", 00:36:03.837 "uuid": "663eed2d-8c19-5bfd-8f4a-43d53837c6a9", 00:36:03.837 "is_configured": true, 00:36:03.837 "data_offset": 2048, 00:36:03.837 "data_size": 63488 00:36:03.837 }, 00:36:03.837 { 00:36:03.837 "name": "BaseBdev3", 00:36:03.837 "uuid": "dc06577f-958a-55da-9a21-7b73771841e7", 00:36:03.837 "is_configured": true, 00:36:03.837 "data_offset": 2048, 00:36:03.837 "data_size": 63488 00:36:03.837 }, 00:36:03.837 { 00:36:03.837 "name": "BaseBdev4", 00:36:03.837 "uuid": "efe03b1c-90dc-58df-84ba-78ff476527d1", 00:36:03.837 "is_configured": true, 00:36:03.837 "data_offset": 2048, 00:36:03.837 "data_size": 63488 00:36:03.837 } 00:36:03.838 ] 00:36:03.838 }' 00:36:03.838 18:34:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:36:03.838 18:34:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:36:03.838 18:34:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:36:03.838 18:34:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:36:03.838 18:34:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:36:03.838 18:34:34 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:36:03.838 18:34:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:36:03.838 18:34:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:36:03.838 18:34:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:36:03.838 18:34:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:36:03.838 18:34:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:36:03.838 18:34:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:36:03.838 18:34:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:36:03.838 18:34:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:36:03.838 18:34:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:03.838 18:34:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:03.838 18:34:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:03.838 18:34:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:03.838 18:34:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:03.838 18:34:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:36:03.838 "name": "raid_bdev1", 00:36:03.838 "uuid": "56f0aa97-aacf-40bc-a0b2-aed30bb07127", 00:36:03.838 "strip_size_kb": 64, 00:36:03.838 "state": "online", 00:36:03.838 "raid_level": "raid5f", 00:36:03.838 "superblock": true, 00:36:03.838 "num_base_bdevs": 4, 00:36:03.838 "num_base_bdevs_discovered": 4, 00:36:03.838 "num_base_bdevs_operational": 4, 00:36:03.838 "base_bdevs_list": [ 00:36:03.838 { 
00:36:03.838 "name": "spare", 00:36:03.838 "uuid": "0b4c43a4-35ac-525c-a364-51da484b7ab7", 00:36:03.838 "is_configured": true, 00:36:03.838 "data_offset": 2048, 00:36:03.838 "data_size": 63488 00:36:03.838 }, 00:36:03.838 { 00:36:03.838 "name": "BaseBdev2", 00:36:03.838 "uuid": "663eed2d-8c19-5bfd-8f4a-43d53837c6a9", 00:36:03.838 "is_configured": true, 00:36:03.838 "data_offset": 2048, 00:36:03.838 "data_size": 63488 00:36:03.838 }, 00:36:03.838 { 00:36:03.838 "name": "BaseBdev3", 00:36:03.838 "uuid": "dc06577f-958a-55da-9a21-7b73771841e7", 00:36:03.838 "is_configured": true, 00:36:03.838 "data_offset": 2048, 00:36:03.838 "data_size": 63488 00:36:03.838 }, 00:36:03.838 { 00:36:03.838 "name": "BaseBdev4", 00:36:03.838 "uuid": "efe03b1c-90dc-58df-84ba-78ff476527d1", 00:36:03.838 "is_configured": true, 00:36:03.838 "data_offset": 2048, 00:36:03.838 "data_size": 63488 00:36:03.838 } 00:36:03.838 ] 00:36:03.838 }' 00:36:03.838 18:34:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:36:03.838 18:34:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:04.096 18:34:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:36:04.096 18:34:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:04.096 18:34:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:04.353 [2024-12-06 18:34:35.046744] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:36:04.353 [2024-12-06 18:34:35.046781] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:36:04.353 [2024-12-06 18:34:35.046877] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:36:04.353 [2024-12-06 18:34:35.046986] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:36:04.353 [2024-12-06 
18:34:35.047013] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:36:04.353 18:34:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:04.353 18:34:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:04.353 18:34:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:36:04.353 18:34:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:04.353 18:34:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:04.353 18:34:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:04.353 18:34:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:36:04.353 18:34:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:36:04.353 18:34:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:36:04.353 18:34:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:36:04.353 18:34:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:36:04.353 18:34:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:36:04.353 18:34:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:36:04.354 18:34:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:36:04.354 18:34:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:36:04.354 18:34:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:36:04.354 18:34:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:36:04.354 18:34:35 
bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:36:04.354 18:34:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:36:04.611 /dev/nbd0 00:36:04.611 18:34:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:36:04.611 18:34:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:36:04.611 18:34:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:36:04.611 18:34:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:36:04.611 18:34:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:36:04.611 18:34:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:36:04.611 18:34:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:36:04.611 18:34:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:36:04.611 18:34:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:36:04.611 18:34:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:36:04.611 18:34:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:36:04.611 1+0 records in 00:36:04.611 1+0 records out 00:36:04.611 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000357308 s, 11.5 MB/s 00:36:04.611 18:34:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:36:04.611 18:34:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:36:04.611 18:34:35 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:36:04.611 18:34:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:36:04.611 18:34:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:36:04.611 18:34:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:36:04.611 18:34:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:36:04.611 18:34:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:36:04.870 /dev/nbd1 00:36:04.870 18:34:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:36:04.870 18:34:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:36:04.870 18:34:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:36:04.870 18:34:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:36:04.870 18:34:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:36:04.870 18:34:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:36:04.870 18:34:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:36:04.870 18:34:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:36:04.870 18:34:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:36:04.870 18:34:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:36:04.870 18:34:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:36:04.870 1+0 records in 00:36:04.870 
1+0 records out 00:36:04.870 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000345814 s, 11.8 MB/s 00:36:04.870 18:34:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:36:04.870 18:34:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:36:04.870 18:34:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:36:04.870 18:34:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:36:04.870 18:34:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:36:04.870 18:34:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:36:04.870 18:34:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:36:04.870 18:34:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:36:04.870 18:34:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:36:04.870 18:34:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:36:04.870 18:34:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:36:04.870 18:34:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:36:04.870 18:34:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:36:04.870 18:34:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:36:04.870 18:34:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:36:05.128 18:34:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:36:05.128 
18:34:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:36:05.128 18:34:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:36:05.128 18:34:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:36:05.128 18:34:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:36:05.128 18:34:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:36:05.128 18:34:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:36:05.128 18:34:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:36:05.128 18:34:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:36:05.128 18:34:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:36:05.386 18:34:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:36:05.386 18:34:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:36:05.386 18:34:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:36:05.386 18:34:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:36:05.386 18:34:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:36:05.386 18:34:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:36:05.386 18:34:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:36:05.386 18:34:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:36:05.386 18:34:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:36:05.386 18:34:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd 
bdev_passthru_delete spare 00:36:05.386 18:34:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:05.386 18:34:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:05.386 18:34:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:05.386 18:34:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:36:05.386 18:34:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:05.386 18:34:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:05.386 [2024-12-06 18:34:36.271780] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:36:05.386 [2024-12-06 18:34:36.271863] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:36:05.386 [2024-12-06 18:34:36.271897] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:36:05.386 [2024-12-06 18:34:36.271910] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:36:05.386 [2024-12-06 18:34:36.274895] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:36:05.386 [2024-12-06 18:34:36.274937] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:36:05.386 [2024-12-06 18:34:36.275075] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:36:05.386 [2024-12-06 18:34:36.275172] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:36:05.386 [2024-12-06 18:34:36.275356] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:36:05.386 [2024-12-06 18:34:36.275468] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:36:05.386 [2024-12-06 18:34:36.275561] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:36:05.386 spare 00:36:05.386 18:34:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:05.387 18:34:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:36:05.387 18:34:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:05.387 18:34:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:05.645 [2024-12-06 18:34:36.375498] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:36:05.645 [2024-12-06 18:34:36.375530] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:36:05.645 [2024-12-06 18:34:36.375845] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000491d0 00:36:05.645 [2024-12-06 18:34:36.382758] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:36:05.645 [2024-12-06 18:34:36.382780] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:36:05.645 [2024-12-06 18:34:36.382992] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:36:05.645 18:34:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:05.645 18:34:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:36:05.645 18:34:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:36:05.645 18:34:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:36:05.645 18:34:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:36:05.645 18:34:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 
00:36:05.645 18:34:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:36:05.645 18:34:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:36:05.645 18:34:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:36:05.645 18:34:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:36:05.645 18:34:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:36:05.645 18:34:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:05.645 18:34:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:05.645 18:34:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:05.645 18:34:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:05.645 18:34:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:05.645 18:34:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:36:05.645 "name": "raid_bdev1", 00:36:05.645 "uuid": "56f0aa97-aacf-40bc-a0b2-aed30bb07127", 00:36:05.645 "strip_size_kb": 64, 00:36:05.645 "state": "online", 00:36:05.645 "raid_level": "raid5f", 00:36:05.645 "superblock": true, 00:36:05.645 "num_base_bdevs": 4, 00:36:05.645 "num_base_bdevs_discovered": 4, 00:36:05.645 "num_base_bdevs_operational": 4, 00:36:05.645 "base_bdevs_list": [ 00:36:05.646 { 00:36:05.646 "name": "spare", 00:36:05.646 "uuid": "0b4c43a4-35ac-525c-a364-51da484b7ab7", 00:36:05.646 "is_configured": true, 00:36:05.646 "data_offset": 2048, 00:36:05.646 "data_size": 63488 00:36:05.646 }, 00:36:05.646 { 00:36:05.646 "name": "BaseBdev2", 00:36:05.646 "uuid": "663eed2d-8c19-5bfd-8f4a-43d53837c6a9", 00:36:05.646 "is_configured": true, 00:36:05.646 "data_offset": 
2048, 00:36:05.646 "data_size": 63488 00:36:05.646 }, 00:36:05.646 { 00:36:05.646 "name": "BaseBdev3", 00:36:05.646 "uuid": "dc06577f-958a-55da-9a21-7b73771841e7", 00:36:05.646 "is_configured": true, 00:36:05.646 "data_offset": 2048, 00:36:05.646 "data_size": 63488 00:36:05.646 }, 00:36:05.646 { 00:36:05.646 "name": "BaseBdev4", 00:36:05.646 "uuid": "efe03b1c-90dc-58df-84ba-78ff476527d1", 00:36:05.646 "is_configured": true, 00:36:05.646 "data_offset": 2048, 00:36:05.646 "data_size": 63488 00:36:05.646 } 00:36:05.646 ] 00:36:05.646 }' 00:36:05.646 18:34:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:36:05.646 18:34:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:05.904 18:34:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:36:05.904 18:34:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:36:05.904 18:34:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:36:05.904 18:34:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:36:05.904 18:34:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:36:05.904 18:34:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:05.905 18:34:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:05.905 18:34:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:05.905 18:34:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:05.905 18:34:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:06.164 18:34:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:36:06.164 "name": 
"raid_bdev1", 00:36:06.164 "uuid": "56f0aa97-aacf-40bc-a0b2-aed30bb07127", 00:36:06.164 "strip_size_kb": 64, 00:36:06.164 "state": "online", 00:36:06.164 "raid_level": "raid5f", 00:36:06.164 "superblock": true, 00:36:06.164 "num_base_bdevs": 4, 00:36:06.164 "num_base_bdevs_discovered": 4, 00:36:06.164 "num_base_bdevs_operational": 4, 00:36:06.164 "base_bdevs_list": [ 00:36:06.164 { 00:36:06.164 "name": "spare", 00:36:06.164 "uuid": "0b4c43a4-35ac-525c-a364-51da484b7ab7", 00:36:06.164 "is_configured": true, 00:36:06.164 "data_offset": 2048, 00:36:06.164 "data_size": 63488 00:36:06.164 }, 00:36:06.164 { 00:36:06.164 "name": "BaseBdev2", 00:36:06.164 "uuid": "663eed2d-8c19-5bfd-8f4a-43d53837c6a9", 00:36:06.164 "is_configured": true, 00:36:06.164 "data_offset": 2048, 00:36:06.164 "data_size": 63488 00:36:06.164 }, 00:36:06.164 { 00:36:06.164 "name": "BaseBdev3", 00:36:06.164 "uuid": "dc06577f-958a-55da-9a21-7b73771841e7", 00:36:06.164 "is_configured": true, 00:36:06.164 "data_offset": 2048, 00:36:06.164 "data_size": 63488 00:36:06.164 }, 00:36:06.164 { 00:36:06.164 "name": "BaseBdev4", 00:36:06.164 "uuid": "efe03b1c-90dc-58df-84ba-78ff476527d1", 00:36:06.164 "is_configured": true, 00:36:06.164 "data_offset": 2048, 00:36:06.164 "data_size": 63488 00:36:06.164 } 00:36:06.164 ] 00:36:06.164 }' 00:36:06.164 18:34:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:36:06.164 18:34:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:36:06.164 18:34:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:36:06.164 18:34:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:36:06.164 18:34:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:36:06.164 18:34:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 
00:36:06.164 18:34:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:06.164 18:34:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:06.164 18:34:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:06.164 18:34:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:36:06.164 18:34:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:36:06.164 18:34:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:06.164 18:34:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:06.164 [2024-12-06 18:34:36.995666] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:36:06.164 18:34:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:06.164 18:34:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:36:06.164 18:34:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:36:06.164 18:34:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:36:06.164 18:34:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:36:06.164 18:34:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:36:06.164 18:34:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:36:06.164 18:34:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:36:06.164 18:34:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:36:06.164 18:34:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:36:06.164 18:34:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:36:06.164 18:34:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:06.164 18:34:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:06.164 18:34:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:06.164 18:34:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:06.164 18:34:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:06.164 18:34:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:36:06.164 "name": "raid_bdev1", 00:36:06.164 "uuid": "56f0aa97-aacf-40bc-a0b2-aed30bb07127", 00:36:06.164 "strip_size_kb": 64, 00:36:06.164 "state": "online", 00:36:06.164 "raid_level": "raid5f", 00:36:06.164 "superblock": true, 00:36:06.164 "num_base_bdevs": 4, 00:36:06.164 "num_base_bdevs_discovered": 3, 00:36:06.164 "num_base_bdevs_operational": 3, 00:36:06.164 "base_bdevs_list": [ 00:36:06.164 { 00:36:06.164 "name": null, 00:36:06.164 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:06.164 "is_configured": false, 00:36:06.164 "data_offset": 0, 00:36:06.164 "data_size": 63488 00:36:06.164 }, 00:36:06.164 { 00:36:06.164 "name": "BaseBdev2", 00:36:06.165 "uuid": "663eed2d-8c19-5bfd-8f4a-43d53837c6a9", 00:36:06.165 "is_configured": true, 00:36:06.165 "data_offset": 2048, 00:36:06.165 "data_size": 63488 00:36:06.165 }, 00:36:06.165 { 00:36:06.165 "name": "BaseBdev3", 00:36:06.165 "uuid": "dc06577f-958a-55da-9a21-7b73771841e7", 00:36:06.165 "is_configured": true, 00:36:06.165 "data_offset": 2048, 00:36:06.165 "data_size": 63488 00:36:06.165 }, 00:36:06.165 { 00:36:06.165 "name": "BaseBdev4", 00:36:06.165 "uuid": "efe03b1c-90dc-58df-84ba-78ff476527d1", 00:36:06.165 "is_configured": true, 00:36:06.165 "data_offset": 
2048, 00:36:06.165 "data_size": 63488 00:36:06.165 } 00:36:06.165 ] 00:36:06.165 }' 00:36:06.165 18:34:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:36:06.165 18:34:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:06.733 18:34:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:36:06.733 18:34:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:06.733 18:34:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:06.733 [2024-12-06 18:34:37.455088] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:36:06.733 [2024-12-06 18:34:37.455271] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:36:06.733 [2024-12-06 18:34:37.455294] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:36:06.733 [2024-12-06 18:34:37.455345] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:36:06.733 [2024-12-06 18:34:37.471061] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000492a0 00:36:06.733 18:34:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:06.733 18:34:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:36:06.733 [2024-12-06 18:34:37.480835] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:36:07.673 18:34:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:36:07.673 18:34:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:36:07.674 18:34:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:36:07.674 18:34:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:36:07.674 18:34:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:36:07.674 18:34:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:07.674 18:34:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:07.674 18:34:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:07.674 18:34:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:07.674 18:34:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:07.674 18:34:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:36:07.674 "name": "raid_bdev1", 00:36:07.674 "uuid": "56f0aa97-aacf-40bc-a0b2-aed30bb07127", 00:36:07.674 "strip_size_kb": 64, 00:36:07.674 "state": "online", 00:36:07.674 
"raid_level": "raid5f", 00:36:07.674 "superblock": true, 00:36:07.674 "num_base_bdevs": 4, 00:36:07.674 "num_base_bdevs_discovered": 4, 00:36:07.674 "num_base_bdevs_operational": 4, 00:36:07.674 "process": { 00:36:07.674 "type": "rebuild", 00:36:07.674 "target": "spare", 00:36:07.674 "progress": { 00:36:07.674 "blocks": 19200, 00:36:07.674 "percent": 10 00:36:07.674 } 00:36:07.674 }, 00:36:07.674 "base_bdevs_list": [ 00:36:07.674 { 00:36:07.674 "name": "spare", 00:36:07.674 "uuid": "0b4c43a4-35ac-525c-a364-51da484b7ab7", 00:36:07.674 "is_configured": true, 00:36:07.674 "data_offset": 2048, 00:36:07.674 "data_size": 63488 00:36:07.674 }, 00:36:07.674 { 00:36:07.674 "name": "BaseBdev2", 00:36:07.674 "uuid": "663eed2d-8c19-5bfd-8f4a-43d53837c6a9", 00:36:07.674 "is_configured": true, 00:36:07.674 "data_offset": 2048, 00:36:07.674 "data_size": 63488 00:36:07.674 }, 00:36:07.674 { 00:36:07.674 "name": "BaseBdev3", 00:36:07.674 "uuid": "dc06577f-958a-55da-9a21-7b73771841e7", 00:36:07.674 "is_configured": true, 00:36:07.674 "data_offset": 2048, 00:36:07.674 "data_size": 63488 00:36:07.674 }, 00:36:07.674 { 00:36:07.674 "name": "BaseBdev4", 00:36:07.674 "uuid": "efe03b1c-90dc-58df-84ba-78ff476527d1", 00:36:07.674 "is_configured": true, 00:36:07.674 "data_offset": 2048, 00:36:07.674 "data_size": 63488 00:36:07.674 } 00:36:07.674 ] 00:36:07.674 }' 00:36:07.674 18:34:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:36:07.674 18:34:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:36:07.674 18:34:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:36:07.674 18:34:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:36:07.934 18:34:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:36:07.934 18:34:38 bdev_raid.raid5f_rebuild_test_sb 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:36:07.934 18:34:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:07.934 [2024-12-06 18:34:38.625164] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:36:07.934 [2024-12-06 18:34:38.689273] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:36:07.934 [2024-12-06 18:34:38.689505] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:36:07.934 [2024-12-06 18:34:38.689531] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:36:07.934 [2024-12-06 18:34:38.689545] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:36:07.934 18:34:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:07.934 18:34:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:36:07.934 18:34:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:36:07.934 18:34:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:36:07.934 18:34:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:36:07.934 18:34:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:36:07.934 18:34:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:36:07.934 18:34:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:36:07.934 18:34:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:36:07.934 18:34:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:36:07.934 18:34:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 
-- # local tmp 00:36:07.934 18:34:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:07.934 18:34:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:07.934 18:34:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:07.934 18:34:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:07.934 18:34:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:07.934 18:34:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:36:07.934 "name": "raid_bdev1", 00:36:07.934 "uuid": "56f0aa97-aacf-40bc-a0b2-aed30bb07127", 00:36:07.934 "strip_size_kb": 64, 00:36:07.934 "state": "online", 00:36:07.934 "raid_level": "raid5f", 00:36:07.934 "superblock": true, 00:36:07.934 "num_base_bdevs": 4, 00:36:07.934 "num_base_bdevs_discovered": 3, 00:36:07.934 "num_base_bdevs_operational": 3, 00:36:07.934 "base_bdevs_list": [ 00:36:07.934 { 00:36:07.934 "name": null, 00:36:07.934 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:07.934 "is_configured": false, 00:36:07.934 "data_offset": 0, 00:36:07.934 "data_size": 63488 00:36:07.934 }, 00:36:07.934 { 00:36:07.934 "name": "BaseBdev2", 00:36:07.934 "uuid": "663eed2d-8c19-5bfd-8f4a-43d53837c6a9", 00:36:07.934 "is_configured": true, 00:36:07.934 "data_offset": 2048, 00:36:07.934 "data_size": 63488 00:36:07.934 }, 00:36:07.934 { 00:36:07.934 "name": "BaseBdev3", 00:36:07.934 "uuid": "dc06577f-958a-55da-9a21-7b73771841e7", 00:36:07.934 "is_configured": true, 00:36:07.934 "data_offset": 2048, 00:36:07.934 "data_size": 63488 00:36:07.934 }, 00:36:07.934 { 00:36:07.934 "name": "BaseBdev4", 00:36:07.934 "uuid": "efe03b1c-90dc-58df-84ba-78ff476527d1", 00:36:07.934 "is_configured": true, 00:36:07.934 "data_offset": 2048, 00:36:07.934 "data_size": 63488 00:36:07.934 } 00:36:07.934 ] 00:36:07.934 
}' 00:36:07.934 18:34:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:36:07.934 18:34:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:08.194 18:34:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:36:08.194 18:34:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:08.194 18:34:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:08.194 [2024-12-06 18:34:39.139502] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:36:08.194 [2024-12-06 18:34:39.139745] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:36:08.194 [2024-12-06 18:34:39.139852] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c380 00:36:08.194 [2024-12-06 18:34:39.139944] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:36:08.194 [2024-12-06 18:34:39.140584] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:36:08.194 [2024-12-06 18:34:39.140628] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:36:08.195 [2024-12-06 18:34:39.140728] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:36:08.195 [2024-12-06 18:34:39.140755] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:36:08.195 [2024-12-06 18:34:39.140768] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:36:08.195 [2024-12-06 18:34:39.140799] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:36:08.454 [2024-12-06 18:34:39.155893] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000049370 00:36:08.454 spare 00:36:08.454 18:34:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:08.454 18:34:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:36:08.454 [2024-12-06 18:34:39.164997] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:36:09.390 18:34:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:36:09.390 18:34:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:36:09.390 18:34:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:36:09.390 18:34:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:36:09.390 18:34:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:36:09.390 18:34:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:09.390 18:34:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:09.390 18:34:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:09.390 18:34:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:09.390 18:34:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:09.390 18:34:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:36:09.390 "name": "raid_bdev1", 00:36:09.390 "uuid": "56f0aa97-aacf-40bc-a0b2-aed30bb07127", 00:36:09.390 "strip_size_kb": 64, 00:36:09.390 "state": 
"online", 00:36:09.390 "raid_level": "raid5f", 00:36:09.390 "superblock": true, 00:36:09.390 "num_base_bdevs": 4, 00:36:09.390 "num_base_bdevs_discovered": 4, 00:36:09.390 "num_base_bdevs_operational": 4, 00:36:09.390 "process": { 00:36:09.390 "type": "rebuild", 00:36:09.390 "target": "spare", 00:36:09.390 "progress": { 00:36:09.390 "blocks": 19200, 00:36:09.390 "percent": 10 00:36:09.390 } 00:36:09.390 }, 00:36:09.390 "base_bdevs_list": [ 00:36:09.390 { 00:36:09.390 "name": "spare", 00:36:09.390 "uuid": "0b4c43a4-35ac-525c-a364-51da484b7ab7", 00:36:09.390 "is_configured": true, 00:36:09.390 "data_offset": 2048, 00:36:09.390 "data_size": 63488 00:36:09.390 }, 00:36:09.390 { 00:36:09.390 "name": "BaseBdev2", 00:36:09.390 "uuid": "663eed2d-8c19-5bfd-8f4a-43d53837c6a9", 00:36:09.390 "is_configured": true, 00:36:09.390 "data_offset": 2048, 00:36:09.390 "data_size": 63488 00:36:09.390 }, 00:36:09.390 { 00:36:09.390 "name": "BaseBdev3", 00:36:09.390 "uuid": "dc06577f-958a-55da-9a21-7b73771841e7", 00:36:09.390 "is_configured": true, 00:36:09.390 "data_offset": 2048, 00:36:09.390 "data_size": 63488 00:36:09.390 }, 00:36:09.390 { 00:36:09.390 "name": "BaseBdev4", 00:36:09.390 "uuid": "efe03b1c-90dc-58df-84ba-78ff476527d1", 00:36:09.390 "is_configured": true, 00:36:09.390 "data_offset": 2048, 00:36:09.390 "data_size": 63488 00:36:09.390 } 00:36:09.390 ] 00:36:09.390 }' 00:36:09.390 18:34:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:36:09.390 18:34:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:36:09.390 18:34:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:36:09.390 18:34:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:36:09.390 18:34:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:36:09.390 18:34:40 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:09.390 18:34:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:09.390 [2024-12-06 18:34:40.308030] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:36:09.649 [2024-12-06 18:34:40.371837] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:36:09.649 [2024-12-06 18:34:40.371892] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:36:09.649 [2024-12-06 18:34:40.371914] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:36:09.649 [2024-12-06 18:34:40.371923] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:36:09.649 18:34:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:09.649 18:34:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:36:09.649 18:34:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:36:09.649 18:34:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:36:09.649 18:34:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:36:09.649 18:34:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:36:09.649 18:34:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:36:09.649 18:34:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:36:09.650 18:34:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:36:09.650 18:34:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:36:09.650 18:34:40 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:36:09.650 18:34:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:09.650 18:34:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:09.650 18:34:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:09.650 18:34:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:09.650 18:34:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:09.650 18:34:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:36:09.650 "name": "raid_bdev1", 00:36:09.650 "uuid": "56f0aa97-aacf-40bc-a0b2-aed30bb07127", 00:36:09.650 "strip_size_kb": 64, 00:36:09.650 "state": "online", 00:36:09.650 "raid_level": "raid5f", 00:36:09.650 "superblock": true, 00:36:09.650 "num_base_bdevs": 4, 00:36:09.650 "num_base_bdevs_discovered": 3, 00:36:09.650 "num_base_bdevs_operational": 3, 00:36:09.650 "base_bdevs_list": [ 00:36:09.650 { 00:36:09.650 "name": null, 00:36:09.650 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:09.650 "is_configured": false, 00:36:09.650 "data_offset": 0, 00:36:09.650 "data_size": 63488 00:36:09.650 }, 00:36:09.650 { 00:36:09.650 "name": "BaseBdev2", 00:36:09.650 "uuid": "663eed2d-8c19-5bfd-8f4a-43d53837c6a9", 00:36:09.650 "is_configured": true, 00:36:09.650 "data_offset": 2048, 00:36:09.650 "data_size": 63488 00:36:09.650 }, 00:36:09.650 { 00:36:09.650 "name": "BaseBdev3", 00:36:09.650 "uuid": "dc06577f-958a-55da-9a21-7b73771841e7", 00:36:09.650 "is_configured": true, 00:36:09.650 "data_offset": 2048, 00:36:09.650 "data_size": 63488 00:36:09.650 }, 00:36:09.650 { 00:36:09.650 "name": "BaseBdev4", 00:36:09.650 "uuid": "efe03b1c-90dc-58df-84ba-78ff476527d1", 00:36:09.650 "is_configured": true, 00:36:09.650 "data_offset": 2048, 00:36:09.650 
"data_size": 63488 00:36:09.650 } 00:36:09.650 ] 00:36:09.650 }' 00:36:09.650 18:34:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:36:09.650 18:34:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:09.908 18:34:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:36:09.908 18:34:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:36:09.908 18:34:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:36:09.908 18:34:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:36:09.908 18:34:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:36:09.908 18:34:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:09.908 18:34:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:09.908 18:34:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:09.908 18:34:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:09.908 18:34:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:09.908 18:34:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:36:09.908 "name": "raid_bdev1", 00:36:09.908 "uuid": "56f0aa97-aacf-40bc-a0b2-aed30bb07127", 00:36:09.908 "strip_size_kb": 64, 00:36:09.908 "state": "online", 00:36:09.908 "raid_level": "raid5f", 00:36:09.908 "superblock": true, 00:36:09.908 "num_base_bdevs": 4, 00:36:09.908 "num_base_bdevs_discovered": 3, 00:36:09.908 "num_base_bdevs_operational": 3, 00:36:09.908 "base_bdevs_list": [ 00:36:09.908 { 00:36:09.908 "name": null, 00:36:09.908 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:09.908 
"is_configured": false, 00:36:09.908 "data_offset": 0, 00:36:09.908 "data_size": 63488 00:36:09.908 }, 00:36:09.908 { 00:36:09.908 "name": "BaseBdev2", 00:36:09.908 "uuid": "663eed2d-8c19-5bfd-8f4a-43d53837c6a9", 00:36:09.908 "is_configured": true, 00:36:09.908 "data_offset": 2048, 00:36:09.908 "data_size": 63488 00:36:09.908 }, 00:36:09.908 { 00:36:09.908 "name": "BaseBdev3", 00:36:09.908 "uuid": "dc06577f-958a-55da-9a21-7b73771841e7", 00:36:09.908 "is_configured": true, 00:36:09.908 "data_offset": 2048, 00:36:09.908 "data_size": 63488 00:36:09.908 }, 00:36:09.908 { 00:36:09.908 "name": "BaseBdev4", 00:36:09.908 "uuid": "efe03b1c-90dc-58df-84ba-78ff476527d1", 00:36:09.908 "is_configured": true, 00:36:09.908 "data_offset": 2048, 00:36:09.908 "data_size": 63488 00:36:09.908 } 00:36:09.908 ] 00:36:09.908 }' 00:36:09.908 18:34:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:36:10.167 18:34:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:36:10.167 18:34:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:36:10.167 18:34:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:36:10.167 18:34:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:36:10.167 18:34:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:10.167 18:34:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:10.167 18:34:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:10.167 18:34:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:36:10.167 18:34:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:10.167 18:34:40 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:10.167 [2024-12-06 18:34:40.913728] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:36:10.167 [2024-12-06 18:34:40.913932] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:36:10.167 [2024-12-06 18:34:40.913967] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c980 00:36:10.167 [2024-12-06 18:34:40.913980] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:36:10.167 [2024-12-06 18:34:40.914461] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:36:10.167 [2024-12-06 18:34:40.914484] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:36:10.167 [2024-12-06 18:34:40.914564] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:36:10.167 [2024-12-06 18:34:40.914579] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:36:10.167 [2024-12-06 18:34:40.914619] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:36:10.167 [2024-12-06 18:34:40.914631] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:36:10.167 BaseBdev1 00:36:10.167 18:34:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:10.167 18:34:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:36:11.103 18:34:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:36:11.103 18:34:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:36:11.103 18:34:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:36:11.103 18:34:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:36:11.103 18:34:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:36:11.103 18:34:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:36:11.103 18:34:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:36:11.103 18:34:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:36:11.103 18:34:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:36:11.103 18:34:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:36:11.103 18:34:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:11.103 18:34:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:11.103 18:34:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:11.103 18:34:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:11.103 18:34:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:11.103 18:34:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:36:11.103 "name": "raid_bdev1", 00:36:11.103 "uuid": "56f0aa97-aacf-40bc-a0b2-aed30bb07127", 00:36:11.103 "strip_size_kb": 64, 00:36:11.103 "state": "online", 00:36:11.103 "raid_level": "raid5f", 00:36:11.103 "superblock": true, 00:36:11.103 "num_base_bdevs": 4, 00:36:11.103 "num_base_bdevs_discovered": 3, 00:36:11.103 "num_base_bdevs_operational": 3, 00:36:11.103 "base_bdevs_list": [ 00:36:11.103 { 00:36:11.103 "name": null, 00:36:11.103 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:11.103 "is_configured": false, 00:36:11.103 
"data_offset": 0, 00:36:11.103 "data_size": 63488 00:36:11.103 }, 00:36:11.103 { 00:36:11.103 "name": "BaseBdev2", 00:36:11.103 "uuid": "663eed2d-8c19-5bfd-8f4a-43d53837c6a9", 00:36:11.103 "is_configured": true, 00:36:11.103 "data_offset": 2048, 00:36:11.103 "data_size": 63488 00:36:11.103 }, 00:36:11.103 { 00:36:11.103 "name": "BaseBdev3", 00:36:11.103 "uuid": "dc06577f-958a-55da-9a21-7b73771841e7", 00:36:11.103 "is_configured": true, 00:36:11.103 "data_offset": 2048, 00:36:11.103 "data_size": 63488 00:36:11.103 }, 00:36:11.103 { 00:36:11.103 "name": "BaseBdev4", 00:36:11.103 "uuid": "efe03b1c-90dc-58df-84ba-78ff476527d1", 00:36:11.103 "is_configured": true, 00:36:11.103 "data_offset": 2048, 00:36:11.103 "data_size": 63488 00:36:11.103 } 00:36:11.103 ] 00:36:11.103 }' 00:36:11.103 18:34:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:36:11.103 18:34:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:11.668 18:34:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:36:11.668 18:34:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:36:11.668 18:34:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:36:11.668 18:34:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:36:11.668 18:34:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:36:11.668 18:34:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:11.668 18:34:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:11.668 18:34:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:11.668 18:34:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:36:11.668 18:34:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:11.668 18:34:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:36:11.668 "name": "raid_bdev1", 00:36:11.668 "uuid": "56f0aa97-aacf-40bc-a0b2-aed30bb07127", 00:36:11.668 "strip_size_kb": 64, 00:36:11.668 "state": "online", 00:36:11.669 "raid_level": "raid5f", 00:36:11.669 "superblock": true, 00:36:11.669 "num_base_bdevs": 4, 00:36:11.669 "num_base_bdevs_discovered": 3, 00:36:11.669 "num_base_bdevs_operational": 3, 00:36:11.669 "base_bdevs_list": [ 00:36:11.669 { 00:36:11.669 "name": null, 00:36:11.669 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:11.669 "is_configured": false, 00:36:11.669 "data_offset": 0, 00:36:11.669 "data_size": 63488 00:36:11.669 }, 00:36:11.669 { 00:36:11.669 "name": "BaseBdev2", 00:36:11.669 "uuid": "663eed2d-8c19-5bfd-8f4a-43d53837c6a9", 00:36:11.669 "is_configured": true, 00:36:11.669 "data_offset": 2048, 00:36:11.669 "data_size": 63488 00:36:11.669 }, 00:36:11.669 { 00:36:11.669 "name": "BaseBdev3", 00:36:11.669 "uuid": "dc06577f-958a-55da-9a21-7b73771841e7", 00:36:11.669 "is_configured": true, 00:36:11.669 "data_offset": 2048, 00:36:11.669 "data_size": 63488 00:36:11.669 }, 00:36:11.669 { 00:36:11.669 "name": "BaseBdev4", 00:36:11.669 "uuid": "efe03b1c-90dc-58df-84ba-78ff476527d1", 00:36:11.669 "is_configured": true, 00:36:11.669 "data_offset": 2048, 00:36:11.669 "data_size": 63488 00:36:11.669 } 00:36:11.669 ] 00:36:11.669 }' 00:36:11.669 18:34:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:36:11.669 18:34:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:36:11.669 18:34:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:36:11.669 18:34:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:36:11.669 
18:34:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:36:11.669 18:34:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@652 -- # local es=0 00:36:11.669 18:34:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:36:11.669 18:34:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:36:11.669 18:34:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:36:11.669 18:34:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:36:11.669 18:34:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:36:11.669 18:34:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:36:11.669 18:34:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:11.669 18:34:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:11.669 [2024-12-06 18:34:42.503668] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:36:11.669 [2024-12-06 18:34:42.503844] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:36:11.669 [2024-12-06 18:34:42.503867] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:36:11.669 request: 00:36:11.669 { 00:36:11.669 "base_bdev": "BaseBdev1", 00:36:11.669 "raid_bdev": "raid_bdev1", 00:36:11.669 "method": "bdev_raid_add_base_bdev", 00:36:11.669 "req_id": 1 00:36:11.669 } 00:36:11.669 Got JSON-RPC error response 00:36:11.669 response: 00:36:11.669 { 00:36:11.669 "code": -22, 00:36:11.669 "message": 
"Failed to add base bdev to RAID bdev: Invalid argument" 00:36:11.669 } 00:36:11.669 18:34:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:36:11.669 18:34:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@655 -- # es=1 00:36:11.669 18:34:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:36:11.669 18:34:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:36:11.669 18:34:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:36:11.669 18:34:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:36:12.602 18:34:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:36:12.602 18:34:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:36:12.602 18:34:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:36:12.602 18:34:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:36:12.602 18:34:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:36:12.602 18:34:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:36:12.602 18:34:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:36:12.602 18:34:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:36:12.602 18:34:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:36:12.602 18:34:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:36:12.602 18:34:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:12.602 18:34:43 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:12.602 18:34:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:12.602 18:34:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:12.865 18:34:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:12.865 18:34:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:36:12.865 "name": "raid_bdev1", 00:36:12.865 "uuid": "56f0aa97-aacf-40bc-a0b2-aed30bb07127", 00:36:12.865 "strip_size_kb": 64, 00:36:12.865 "state": "online", 00:36:12.865 "raid_level": "raid5f", 00:36:12.865 "superblock": true, 00:36:12.865 "num_base_bdevs": 4, 00:36:12.865 "num_base_bdevs_discovered": 3, 00:36:12.865 "num_base_bdevs_operational": 3, 00:36:12.865 "base_bdevs_list": [ 00:36:12.865 { 00:36:12.865 "name": null, 00:36:12.866 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:12.866 "is_configured": false, 00:36:12.866 "data_offset": 0, 00:36:12.866 "data_size": 63488 00:36:12.866 }, 00:36:12.866 { 00:36:12.866 "name": "BaseBdev2", 00:36:12.866 "uuid": "663eed2d-8c19-5bfd-8f4a-43d53837c6a9", 00:36:12.866 "is_configured": true, 00:36:12.866 "data_offset": 2048, 00:36:12.866 "data_size": 63488 00:36:12.866 }, 00:36:12.866 { 00:36:12.866 "name": "BaseBdev3", 00:36:12.866 "uuid": "dc06577f-958a-55da-9a21-7b73771841e7", 00:36:12.866 "is_configured": true, 00:36:12.866 "data_offset": 2048, 00:36:12.866 "data_size": 63488 00:36:12.866 }, 00:36:12.866 { 00:36:12.866 "name": "BaseBdev4", 00:36:12.866 "uuid": "efe03b1c-90dc-58df-84ba-78ff476527d1", 00:36:12.866 "is_configured": true, 00:36:12.866 "data_offset": 2048, 00:36:12.866 "data_size": 63488 00:36:12.866 } 00:36:12.866 ] 00:36:12.866 }' 00:36:12.866 18:34:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:36:12.866 18:34:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 
-- # set +x 00:36:13.133 18:34:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:36:13.133 18:34:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:36:13.133 18:34:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:36:13.133 18:34:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:36:13.133 18:34:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:36:13.133 18:34:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:13.133 18:34:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:13.133 18:34:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:13.133 18:34:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:13.133 18:34:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:13.133 18:34:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:36:13.133 "name": "raid_bdev1", 00:36:13.133 "uuid": "56f0aa97-aacf-40bc-a0b2-aed30bb07127", 00:36:13.133 "strip_size_kb": 64, 00:36:13.133 "state": "online", 00:36:13.133 "raid_level": "raid5f", 00:36:13.133 "superblock": true, 00:36:13.133 "num_base_bdevs": 4, 00:36:13.133 "num_base_bdevs_discovered": 3, 00:36:13.133 "num_base_bdevs_operational": 3, 00:36:13.133 "base_bdevs_list": [ 00:36:13.133 { 00:36:13.133 "name": null, 00:36:13.133 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:13.133 "is_configured": false, 00:36:13.133 "data_offset": 0, 00:36:13.133 "data_size": 63488 00:36:13.133 }, 00:36:13.133 { 00:36:13.133 "name": "BaseBdev2", 00:36:13.133 "uuid": "663eed2d-8c19-5bfd-8f4a-43d53837c6a9", 00:36:13.133 "is_configured": true, 
00:36:13.133 "data_offset": 2048, 00:36:13.133 "data_size": 63488 00:36:13.133 }, 00:36:13.133 { 00:36:13.133 "name": "BaseBdev3", 00:36:13.133 "uuid": "dc06577f-958a-55da-9a21-7b73771841e7", 00:36:13.133 "is_configured": true, 00:36:13.133 "data_offset": 2048, 00:36:13.133 "data_size": 63488 00:36:13.133 }, 00:36:13.133 { 00:36:13.133 "name": "BaseBdev4", 00:36:13.133 "uuid": "efe03b1c-90dc-58df-84ba-78ff476527d1", 00:36:13.133 "is_configured": true, 00:36:13.133 "data_offset": 2048, 00:36:13.133 "data_size": 63488 00:36:13.133 } 00:36:13.133 ] 00:36:13.133 }' 00:36:13.133 18:34:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:36:13.133 18:34:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:36:13.133 18:34:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:36:13.133 18:34:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:36:13.133 18:34:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 84837 00:36:13.133 18:34:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@954 -- # '[' -z 84837 ']' 00:36:13.133 18:34:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@958 -- # kill -0 84837 00:36:13.133 18:34:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@959 -- # uname 00:36:13.133 18:34:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:36:13.133 18:34:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84837 00:36:13.392 killing process with pid 84837 00:36:13.392 Received shutdown signal, test time was about 60.000000 seconds 00:36:13.393 00:36:13.393 Latency(us) 00:36:13.393 [2024-12-06T18:34:44.342Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:13.393 [2024-12-06T18:34:44.342Z] 
=================================================================================================================== 00:36:13.393 [2024-12-06T18:34:44.342Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:36:13.393 18:34:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:36:13.393 18:34:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:36:13.393 18:34:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84837' 00:36:13.393 18:34:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@973 -- # kill 84837 00:36:13.393 [2024-12-06 18:34:44.100270] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:36:13.393 [2024-12-06 18:34:44.100400] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:36:13.393 18:34:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@978 -- # wait 84837 00:36:13.393 [2024-12-06 18:34:44.100478] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:36:13.393 [2024-12-06 18:34:44.100494] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:36:13.651 [2024-12-06 18:34:44.582442] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:36:15.029 18:34:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:36:15.029 00:36:15.029 real 0m26.762s 00:36:15.029 user 0m33.055s 00:36:15.029 sys 0m3.623s 00:36:15.029 18:34:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:15.029 18:34:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:36:15.029 ************************************ 00:36:15.029 END TEST raid5f_rebuild_test_sb 00:36:15.029 ************************************ 00:36:15.029 18:34:45 bdev_raid -- 
bdev/bdev_raid.sh@995 -- # base_blocklen=4096 00:36:15.029 18:34:45 bdev_raid -- bdev/bdev_raid.sh@997 -- # run_test raid_state_function_test_sb_4k raid_state_function_test raid1 2 true 00:36:15.029 18:34:45 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:36:15.029 18:34:45 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:15.029 18:34:45 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:36:15.029 ************************************ 00:36:15.029 START TEST raid_state_function_test_sb_4k 00:36:15.029 ************************************ 00:36:15.029 18:34:45 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 2 true 00:36:15.029 18:34:45 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:36:15.029 18:34:45 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:36:15.029 18:34:45 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:36:15.029 18:34:45 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:36:15.029 18:34:45 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:36:15.029 18:34:45 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:36:15.029 18:34:45 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:36:15.029 18:34:45 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:36:15.029 18:34:45 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:36:15.029 18:34:45 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:36:15.029 18:34:45 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:36:15.029 18:34:45 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:36:15.029 18:34:45 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:36:15.029 18:34:45 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:36:15.029 18:34:45 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:36:15.029 18:34:45 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@211 -- # local strip_size 00:36:15.029 18:34:45 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:36:15.029 18:34:45 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:36:15.029 18:34:45 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:36:15.029 18:34:45 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:36:15.029 18:34:45 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:36:15.029 18:34:45 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:36:15.029 18:34:45 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@229 -- # raid_pid=85643 00:36:15.029 18:34:45 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:36:15.029 Process raid pid: 85643 00:36:15.029 18:34:45 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 85643' 00:36:15.029 18:34:45 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@231 -- # waitforlisten 85643 00:36:15.029 18:34:45 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@835 -- # '[' -z 85643 ']' 00:36:15.029 18:34:45 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:15.029 18:34:45 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:15.029 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:15.029 18:34:45 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:15.029 18:34:45 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:15.029 18:34:45 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:36:15.029 [2024-12-06 18:34:45.882377] Starting SPDK v25.01-pre git sha1 b6a18b192 / DPDK 24.03.0 initialization... 00:36:15.029 [2024-12-06 18:34:45.882518] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:15.287 [2024-12-06 18:34:46.066008] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:15.287 [2024-12-06 18:34:46.179500] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:36:15.546 [2024-12-06 18:34:46.392053] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:36:15.546 [2024-12-06 18:34:46.392098] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:36:15.804 18:34:46 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:15.804 18:34:46 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@868 -- # return 0 00:36:15.804 18:34:46 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n 
Existed_Raid 00:36:15.804 18:34:46 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:15.804 18:34:46 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:36:15.804 [2024-12-06 18:34:46.705003] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:36:15.804 [2024-12-06 18:34:46.705076] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:36:15.804 [2024-12-06 18:34:46.705089] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:36:15.804 [2024-12-06 18:34:46.705103] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:36:15.804 18:34:46 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:15.804 18:34:46 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:36:15.804 18:34:46 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:36:15.804 18:34:46 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:36:15.804 18:34:46 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:36:15.804 18:34:46 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:36:15.804 18:34:46 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:36:15.804 18:34:46 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:36:15.804 18:34:46 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:36:15.804 18:34:46 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:36:15.804 
18:34:46 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:36:15.804 18:34:46 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:15.804 18:34:46 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:15.804 18:34:46 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:36:15.804 18:34:46 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:36:15.804 18:34:46 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:16.064 18:34:46 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:36:16.064 "name": "Existed_Raid", 00:36:16.064 "uuid": "9df9c30d-c5c6-4aec-95cc-f2499c9a4b6d", 00:36:16.064 "strip_size_kb": 0, 00:36:16.064 "state": "configuring", 00:36:16.064 "raid_level": "raid1", 00:36:16.064 "superblock": true, 00:36:16.064 "num_base_bdevs": 2, 00:36:16.064 "num_base_bdevs_discovered": 0, 00:36:16.064 "num_base_bdevs_operational": 2, 00:36:16.064 "base_bdevs_list": [ 00:36:16.064 { 00:36:16.064 "name": "BaseBdev1", 00:36:16.064 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:16.064 "is_configured": false, 00:36:16.064 "data_offset": 0, 00:36:16.064 "data_size": 0 00:36:16.064 }, 00:36:16.064 { 00:36:16.064 "name": "BaseBdev2", 00:36:16.064 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:16.064 "is_configured": false, 00:36:16.064 "data_offset": 0, 00:36:16.064 "data_size": 0 00:36:16.064 } 00:36:16.064 ] 00:36:16.064 }' 00:36:16.064 18:34:46 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:36:16.064 18:34:46 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:36:16.323 18:34:47 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@237 -- # rpc_cmd 
bdev_raid_delete Existed_Raid 00:36:16.323 18:34:47 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:16.323 18:34:47 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:36:16.323 [2024-12-06 18:34:47.156294] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:36:16.323 [2024-12-06 18:34:47.156334] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:36:16.323 18:34:47 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:16.323 18:34:47 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:36:16.323 18:34:47 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:16.323 18:34:47 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:36:16.323 [2024-12-06 18:34:47.168302] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:36:16.323 [2024-12-06 18:34:47.168347] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:36:16.323 [2024-12-06 18:34:47.168357] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:36:16.323 [2024-12-06 18:34:47.168372] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:36:16.323 18:34:47 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:16.323 18:34:47 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev1 00:36:16.323 18:34:47 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:16.323 18:34:47 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:36:16.323 [2024-12-06 18:34:47.216612] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:36:16.323 BaseBdev1 00:36:16.323 18:34:47 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:16.323 18:34:47 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:36:16.323 18:34:47 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:36:16.323 18:34:47 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:36:16.323 18:34:47 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@905 -- # local i 00:36:16.323 18:34:47 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:36:16.323 18:34:47 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:36:16.323 18:34:47 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:36:16.323 18:34:47 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:16.323 18:34:47 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:36:16.323 18:34:47 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:16.323 18:34:47 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:36:16.323 18:34:47 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:16.323 18:34:47 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:36:16.323 [ 00:36:16.323 { 00:36:16.323 "name": "BaseBdev1", 00:36:16.323 "aliases": [ 00:36:16.323 
"9b9721af-ece8-45bd-9b2a-039534976cf8" 00:36:16.323 ], 00:36:16.323 "product_name": "Malloc disk", 00:36:16.323 "block_size": 4096, 00:36:16.323 "num_blocks": 8192, 00:36:16.323 "uuid": "9b9721af-ece8-45bd-9b2a-039534976cf8", 00:36:16.323 "assigned_rate_limits": { 00:36:16.323 "rw_ios_per_sec": 0, 00:36:16.323 "rw_mbytes_per_sec": 0, 00:36:16.323 "r_mbytes_per_sec": 0, 00:36:16.323 "w_mbytes_per_sec": 0 00:36:16.323 }, 00:36:16.323 "claimed": true, 00:36:16.323 "claim_type": "exclusive_write", 00:36:16.323 "zoned": false, 00:36:16.323 "supported_io_types": { 00:36:16.323 "read": true, 00:36:16.323 "write": true, 00:36:16.323 "unmap": true, 00:36:16.323 "flush": true, 00:36:16.323 "reset": true, 00:36:16.323 "nvme_admin": false, 00:36:16.323 "nvme_io": false, 00:36:16.323 "nvme_io_md": false, 00:36:16.323 "write_zeroes": true, 00:36:16.323 "zcopy": true, 00:36:16.323 "get_zone_info": false, 00:36:16.323 "zone_management": false, 00:36:16.323 "zone_append": false, 00:36:16.323 "compare": false, 00:36:16.323 "compare_and_write": false, 00:36:16.323 "abort": true, 00:36:16.323 "seek_hole": false, 00:36:16.323 "seek_data": false, 00:36:16.323 "copy": true, 00:36:16.323 "nvme_iov_md": false 00:36:16.323 }, 00:36:16.323 "memory_domains": [ 00:36:16.323 { 00:36:16.323 "dma_device_id": "system", 00:36:16.323 "dma_device_type": 1 00:36:16.323 }, 00:36:16.323 { 00:36:16.323 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:36:16.323 "dma_device_type": 2 00:36:16.323 } 00:36:16.323 ], 00:36:16.323 "driver_specific": {} 00:36:16.323 } 00:36:16.323 ] 00:36:16.324 18:34:47 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:16.324 18:34:47 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@911 -- # return 0 00:36:16.324 18:34:47 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:36:16.324 18:34:47 bdev_raid.raid_state_function_test_sb_4k -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:36:16.324 18:34:47 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:36:16.324 18:34:47 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:36:16.324 18:34:47 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:36:16.324 18:34:47 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:36:16.324 18:34:47 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:36:16.324 18:34:47 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:36:16.324 18:34:47 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:36:16.324 18:34:47 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:36:16.324 18:34:47 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:16.324 18:34:47 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:36:16.324 18:34:47 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:16.324 18:34:47 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:36:16.583 18:34:47 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:16.583 18:34:47 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:36:16.583 "name": "Existed_Raid", 00:36:16.583 "uuid": "01d143b1-0c3f-4475-8f62-66d5f63b6acb", 00:36:16.583 "strip_size_kb": 0, 00:36:16.583 "state": "configuring", 00:36:16.583 "raid_level": "raid1", 00:36:16.583 "superblock": true, 00:36:16.583 "num_base_bdevs": 2, 00:36:16.583 
"num_base_bdevs_discovered": 1, 00:36:16.583 "num_base_bdevs_operational": 2, 00:36:16.583 "base_bdevs_list": [ 00:36:16.583 { 00:36:16.583 "name": "BaseBdev1", 00:36:16.583 "uuid": "9b9721af-ece8-45bd-9b2a-039534976cf8", 00:36:16.583 "is_configured": true, 00:36:16.583 "data_offset": 256, 00:36:16.583 "data_size": 7936 00:36:16.583 }, 00:36:16.583 { 00:36:16.583 "name": "BaseBdev2", 00:36:16.583 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:16.583 "is_configured": false, 00:36:16.583 "data_offset": 0, 00:36:16.583 "data_size": 0 00:36:16.583 } 00:36:16.583 ] 00:36:16.583 }' 00:36:16.583 18:34:47 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:36:16.583 18:34:47 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:36:16.842 18:34:47 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:36:16.842 18:34:47 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:16.842 18:34:47 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:36:16.842 [2024-12-06 18:34:47.644053] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:36:16.842 [2024-12-06 18:34:47.644094] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:36:16.842 18:34:47 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:16.842 18:34:47 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:36:16.842 18:34:47 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:16.842 18:34:47 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:36:16.842 [2024-12-06 18:34:47.656085] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:36:16.842 [2024-12-06 18:34:47.658014] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:36:16.842 [2024-12-06 18:34:47.658059] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:36:16.842 18:34:47 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:16.842 18:34:47 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:36:16.842 18:34:47 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:36:16.842 18:34:47 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:36:16.842 18:34:47 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:36:16.842 18:34:47 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:36:16.842 18:34:47 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:36:16.842 18:34:47 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:36:16.842 18:34:47 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:36:16.842 18:34:47 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:36:16.842 18:34:47 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:36:16.842 18:34:47 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:36:16.842 18:34:47 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:36:16.842 18:34:47 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:36:16.842 18:34:47 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:16.842 18:34:47 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:36:16.842 18:34:47 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:36:16.842 18:34:47 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:16.842 18:34:47 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:36:16.842 "name": "Existed_Raid", 00:36:16.842 "uuid": "f6392cf5-f068-4d19-894b-0edd212901f6", 00:36:16.842 "strip_size_kb": 0, 00:36:16.842 "state": "configuring", 00:36:16.842 "raid_level": "raid1", 00:36:16.842 "superblock": true, 00:36:16.842 "num_base_bdevs": 2, 00:36:16.842 "num_base_bdevs_discovered": 1, 00:36:16.842 "num_base_bdevs_operational": 2, 00:36:16.842 "base_bdevs_list": [ 00:36:16.842 { 00:36:16.842 "name": "BaseBdev1", 00:36:16.842 "uuid": "9b9721af-ece8-45bd-9b2a-039534976cf8", 00:36:16.842 "is_configured": true, 00:36:16.842 "data_offset": 256, 00:36:16.842 "data_size": 7936 00:36:16.842 }, 00:36:16.842 { 00:36:16.842 "name": "BaseBdev2", 00:36:16.842 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:16.842 "is_configured": false, 00:36:16.842 "data_offset": 0, 00:36:16.842 "data_size": 0 00:36:16.842 } 00:36:16.842 ] 00:36:16.842 }' 00:36:16.842 18:34:47 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:36:16.842 18:34:47 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:36:17.132 18:34:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev2 00:36:17.132 18:34:48 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:17.132 18:34:48 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:36:17.439 [2024-12-06 18:34:48.097680] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:36:17.439 [2024-12-06 18:34:48.097913] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:36:17.439 [2024-12-06 18:34:48.097928] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:36:17.439 [2024-12-06 18:34:48.098216] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:36:17.439 [2024-12-06 18:34:48.098366] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:36:17.439 [2024-12-06 18:34:48.098390] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:36:17.439 BaseBdev2 00:36:17.439 [2024-12-06 18:34:48.098530] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:36:17.439 18:34:48 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:17.439 18:34:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:36:17.439 18:34:48 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:36:17.439 18:34:48 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:36:17.439 18:34:48 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@905 -- # local i 00:36:17.439 18:34:48 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:36:17.439 18:34:48 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:36:17.439 18:34:48 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:36:17.439 18:34:48 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:17.439 18:34:48 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:36:17.439 18:34:48 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:17.439 18:34:48 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:36:17.439 18:34:48 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:17.439 18:34:48 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:36:17.439 [ 00:36:17.439 { 00:36:17.439 "name": "BaseBdev2", 00:36:17.439 "aliases": [ 00:36:17.439 "b2386909-ab31-4cc8-b564-a82a69a143ee" 00:36:17.439 ], 00:36:17.439 "product_name": "Malloc disk", 00:36:17.439 "block_size": 4096, 00:36:17.439 "num_blocks": 8192, 00:36:17.439 "uuid": "b2386909-ab31-4cc8-b564-a82a69a143ee", 00:36:17.439 "assigned_rate_limits": { 00:36:17.439 "rw_ios_per_sec": 0, 00:36:17.439 "rw_mbytes_per_sec": 0, 00:36:17.439 "r_mbytes_per_sec": 0, 00:36:17.439 "w_mbytes_per_sec": 0 00:36:17.439 }, 00:36:17.439 "claimed": true, 00:36:17.439 "claim_type": "exclusive_write", 00:36:17.439 "zoned": false, 00:36:17.439 "supported_io_types": { 00:36:17.439 "read": true, 00:36:17.439 "write": true, 00:36:17.439 "unmap": true, 00:36:17.439 "flush": true, 00:36:17.439 "reset": true, 00:36:17.439 "nvme_admin": false, 00:36:17.439 "nvme_io": false, 00:36:17.439 "nvme_io_md": false, 00:36:17.439 "write_zeroes": true, 00:36:17.439 "zcopy": true, 00:36:17.439 "get_zone_info": false, 00:36:17.439 "zone_management": false, 00:36:17.439 "zone_append": false, 00:36:17.439 "compare": false, 00:36:17.439 "compare_and_write": false, 00:36:17.439 "abort": true, 00:36:17.439 "seek_hole": false, 00:36:17.439 "seek_data": false, 00:36:17.439 "copy": true, 00:36:17.439 "nvme_iov_md": false 
00:36:17.439 }, 00:36:17.439 "memory_domains": [ 00:36:17.439 { 00:36:17.439 "dma_device_id": "system", 00:36:17.439 "dma_device_type": 1 00:36:17.439 }, 00:36:17.439 { 00:36:17.439 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:36:17.439 "dma_device_type": 2 00:36:17.439 } 00:36:17.439 ], 00:36:17.439 "driver_specific": {} 00:36:17.439 } 00:36:17.439 ] 00:36:17.439 18:34:48 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:17.439 18:34:48 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@911 -- # return 0 00:36:17.439 18:34:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:36:17.439 18:34:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:36:17.439 18:34:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:36:17.439 18:34:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:36:17.439 18:34:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:36:17.439 18:34:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:36:17.439 18:34:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:36:17.439 18:34:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:36:17.439 18:34:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:36:17.439 18:34:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:36:17.439 18:34:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:36:17.439 18:34:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # 
local tmp 00:36:17.439 18:34:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:36:17.439 18:34:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:17.439 18:34:48 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:17.439 18:34:48 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:36:17.439 18:34:48 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:17.439 18:34:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:36:17.439 "name": "Existed_Raid", 00:36:17.439 "uuid": "f6392cf5-f068-4d19-894b-0edd212901f6", 00:36:17.439 "strip_size_kb": 0, 00:36:17.439 "state": "online", 00:36:17.439 "raid_level": "raid1", 00:36:17.439 "superblock": true, 00:36:17.439 "num_base_bdevs": 2, 00:36:17.439 "num_base_bdevs_discovered": 2, 00:36:17.439 "num_base_bdevs_operational": 2, 00:36:17.439 "base_bdevs_list": [ 00:36:17.439 { 00:36:17.439 "name": "BaseBdev1", 00:36:17.439 "uuid": "9b9721af-ece8-45bd-9b2a-039534976cf8", 00:36:17.439 "is_configured": true, 00:36:17.439 "data_offset": 256, 00:36:17.439 "data_size": 7936 00:36:17.439 }, 00:36:17.439 { 00:36:17.439 "name": "BaseBdev2", 00:36:17.439 "uuid": "b2386909-ab31-4cc8-b564-a82a69a143ee", 00:36:17.439 "is_configured": true, 00:36:17.439 "data_offset": 256, 00:36:17.439 "data_size": 7936 00:36:17.439 } 00:36:17.439 ] 00:36:17.439 }' 00:36:17.439 18:34:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:36:17.439 18:34:48 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:36:17.699 18:34:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:36:17.699 18:34:48 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:36:17.699 18:34:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:36:17.699 18:34:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:36:17.699 18:34:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local name 00:36:17.699 18:34:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:36:17.699 18:34:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:36:17.699 18:34:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:36:17.699 18:34:48 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:17.699 18:34:48 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:36:17.699 [2024-12-06 18:34:48.573295] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:36:17.699 18:34:48 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:17.699 18:34:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:36:17.699 "name": "Existed_Raid", 00:36:17.699 "aliases": [ 00:36:17.699 "f6392cf5-f068-4d19-894b-0edd212901f6" 00:36:17.699 ], 00:36:17.699 "product_name": "Raid Volume", 00:36:17.699 "block_size": 4096, 00:36:17.699 "num_blocks": 7936, 00:36:17.699 "uuid": "f6392cf5-f068-4d19-894b-0edd212901f6", 00:36:17.699 "assigned_rate_limits": { 00:36:17.699 "rw_ios_per_sec": 0, 00:36:17.699 "rw_mbytes_per_sec": 0, 00:36:17.699 "r_mbytes_per_sec": 0, 00:36:17.699 "w_mbytes_per_sec": 0 00:36:17.699 }, 00:36:17.699 "claimed": false, 00:36:17.699 "zoned": false, 00:36:17.699 "supported_io_types": { 00:36:17.699 "read": true, 
00:36:17.699 "write": true, 00:36:17.699 "unmap": false, 00:36:17.699 "flush": false, 00:36:17.699 "reset": true, 00:36:17.699 "nvme_admin": false, 00:36:17.699 "nvme_io": false, 00:36:17.699 "nvme_io_md": false, 00:36:17.699 "write_zeroes": true, 00:36:17.699 "zcopy": false, 00:36:17.699 "get_zone_info": false, 00:36:17.699 "zone_management": false, 00:36:17.699 "zone_append": false, 00:36:17.699 "compare": false, 00:36:17.699 "compare_and_write": false, 00:36:17.699 "abort": false, 00:36:17.699 "seek_hole": false, 00:36:17.699 "seek_data": false, 00:36:17.699 "copy": false, 00:36:17.699 "nvme_iov_md": false 00:36:17.699 }, 00:36:17.699 "memory_domains": [ 00:36:17.699 { 00:36:17.699 "dma_device_id": "system", 00:36:17.699 "dma_device_type": 1 00:36:17.699 }, 00:36:17.699 { 00:36:17.699 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:36:17.699 "dma_device_type": 2 00:36:17.699 }, 00:36:17.699 { 00:36:17.699 "dma_device_id": "system", 00:36:17.699 "dma_device_type": 1 00:36:17.699 }, 00:36:17.699 { 00:36:17.699 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:36:17.699 "dma_device_type": 2 00:36:17.699 } 00:36:17.699 ], 00:36:17.699 "driver_specific": { 00:36:17.699 "raid": { 00:36:17.699 "uuid": "f6392cf5-f068-4d19-894b-0edd212901f6", 00:36:17.699 "strip_size_kb": 0, 00:36:17.699 "state": "online", 00:36:17.699 "raid_level": "raid1", 00:36:17.699 "superblock": true, 00:36:17.699 "num_base_bdevs": 2, 00:36:17.699 "num_base_bdevs_discovered": 2, 00:36:17.699 "num_base_bdevs_operational": 2, 00:36:17.699 "base_bdevs_list": [ 00:36:17.699 { 00:36:17.699 "name": "BaseBdev1", 00:36:17.699 "uuid": "9b9721af-ece8-45bd-9b2a-039534976cf8", 00:36:17.699 "is_configured": true, 00:36:17.699 "data_offset": 256, 00:36:17.699 "data_size": 7936 00:36:17.699 }, 00:36:17.699 { 00:36:17.699 "name": "BaseBdev2", 00:36:17.699 "uuid": "b2386909-ab31-4cc8-b564-a82a69a143ee", 00:36:17.699 "is_configured": true, 00:36:17.699 "data_offset": 256, 00:36:17.699 "data_size": 7936 00:36:17.699 } 
00:36:17.699 ] 00:36:17.699 } 00:36:17.699 } 00:36:17.699 }' 00:36:17.699 18:34:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:36:17.959 18:34:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:36:17.959 BaseBdev2' 00:36:17.959 18:34:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:36:17.959 18:34:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 ' 00:36:17.959 18:34:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:36:17.959 18:34:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:36:17.959 18:34:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:36:17.959 18:34:48 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:17.959 18:34:48 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:36:17.959 18:34:48 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:17.959 18:34:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:36:17.959 18:34:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:36:17.959 18:34:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:36:17.959 18:34:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:36:17.959 18:34:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- 
# jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:36:17.959 18:34:48 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:17.959 18:34:48 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:36:17.959 18:34:48 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:17.959 18:34:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:36:17.959 18:34:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:36:17.959 18:34:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:36:17.959 18:34:48 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:17.959 18:34:48 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:36:17.959 [2024-12-06 18:34:48.792776] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:36:17.959 18:34:48 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:17.959 18:34:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@260 -- # local expected_state 00:36:17.959 18:34:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:36:17.959 18:34:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@198 -- # case $1 in 00:36:17.959 18:34:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@199 -- # return 0 00:36:17.959 18:34:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:36:17.959 18:34:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:36:17.959 18:34:48 bdev_raid.raid_state_function_test_sb_4k -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:36:17.959 18:34:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:36:17.959 18:34:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:36:17.959 18:34:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:36:17.959 18:34:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:36:17.959 18:34:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:36:17.959 18:34:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:36:17.959 18:34:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:36:17.959 18:34:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:36:17.959 18:34:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:36:17.959 18:34:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:17.959 18:34:48 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:17.959 18:34:48 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:36:18.219 18:34:48 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:18.219 18:34:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:36:18.219 "name": "Existed_Raid", 00:36:18.219 "uuid": "f6392cf5-f068-4d19-894b-0edd212901f6", 00:36:18.219 "strip_size_kb": 0, 00:36:18.219 "state": "online", 00:36:18.219 "raid_level": "raid1", 00:36:18.219 "superblock": true, 00:36:18.219 "num_base_bdevs": 2, 00:36:18.219 
"num_base_bdevs_discovered": 1, 00:36:18.219 "num_base_bdevs_operational": 1, 00:36:18.219 "base_bdevs_list": [ 00:36:18.219 { 00:36:18.219 "name": null, 00:36:18.219 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:18.219 "is_configured": false, 00:36:18.219 "data_offset": 0, 00:36:18.219 "data_size": 7936 00:36:18.219 }, 00:36:18.219 { 00:36:18.219 "name": "BaseBdev2", 00:36:18.219 "uuid": "b2386909-ab31-4cc8-b564-a82a69a143ee", 00:36:18.219 "is_configured": true, 00:36:18.219 "data_offset": 256, 00:36:18.219 "data_size": 7936 00:36:18.219 } 00:36:18.219 ] 00:36:18.219 }' 00:36:18.219 18:34:48 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:36:18.219 18:34:48 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:36:18.478 18:34:49 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:36:18.478 18:34:49 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:36:18.478 18:34:49 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:18.478 18:34:49 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:36:18.478 18:34:49 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:18.478 18:34:49 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:36:18.478 18:34:49 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:18.478 18:34:49 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:36:18.478 18:34:49 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:36:18.478 18:34:49 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:36:18.478 18:34:49 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:18.478 18:34:49 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:36:18.478 [2024-12-06 18:34:49.333143] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:36:18.478 [2024-12-06 18:34:49.333259] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:36:18.478 [2024-12-06 18:34:49.425684] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:36:18.478 [2024-12-06 18:34:49.425734] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:36:18.478 [2024-12-06 18:34:49.425748] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:36:18.478 18:34:49 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:18.737 18:34:49 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:36:18.737 18:34:49 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:36:18.737 18:34:49 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:36:18.737 18:34:49 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:18.737 18:34:49 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:18.737 18:34:49 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:36:18.737 18:34:49 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:18.737 18:34:49 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:36:18.737 18:34:49 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@279 -- # '[' -n '' 
']' 00:36:18.737 18:34:49 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:36:18.737 18:34:49 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@326 -- # killprocess 85643 00:36:18.738 18:34:49 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@954 -- # '[' -z 85643 ']' 00:36:18.738 18:34:49 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@958 -- # kill -0 85643 00:36:18.738 18:34:49 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@959 -- # uname 00:36:18.738 18:34:49 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:36:18.738 18:34:49 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85643 00:36:18.738 18:34:49 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:36:18.738 18:34:49 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:36:18.738 killing process with pid 85643 00:36:18.738 18:34:49 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85643' 00:36:18.738 18:34:49 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@973 -- # kill 85643 00:36:18.738 [2024-12-06 18:34:49.520644] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:36:18.738 18:34:49 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@978 -- # wait 85643 00:36:18.738 [2024-12-06 18:34:49.536958] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:36:19.675 18:34:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@328 -- # return 0 00:36:19.675 00:36:19.675 real 0m4.849s 00:36:19.675 user 0m6.904s 00:36:19.675 sys 0m0.978s 00:36:19.675 18:34:50 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@1130 -- # xtrace_disable 
00:36:19.675 ************************************ 00:36:19.675 END TEST raid_state_function_test_sb_4k 00:36:19.675 ************************************ 00:36:19.675 18:34:50 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:36:19.935 18:34:50 bdev_raid -- bdev/bdev_raid.sh@998 -- # run_test raid_superblock_test_4k raid_superblock_test raid1 2 00:36:19.935 18:34:50 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:36:19.935 18:34:50 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:19.935 18:34:50 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:36:19.935 ************************************ 00:36:19.935 START TEST raid_superblock_test_4k 00:36:19.935 ************************************ 00:36:19.935 18:34:50 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 2 00:36:19.935 18:34:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:36:19.935 18:34:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:36:19.935 18:34:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:36:19.935 18:34:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:36:19.935 18:34:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:36:19.935 18:34:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:36:19.935 18:34:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:36:19.935 18:34:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:36:19.935 18:34:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:36:19.935 18:34:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@399 -- # local strip_size 
00:36:19.935 18:34:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:36:19.935 18:34:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:36:19.935 18:34:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:36:19.935 18:34:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:36:19.935 18:34:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:36:19.935 18:34:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@412 -- # raid_pid=85895 00:36:19.935 18:34:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:36:19.935 18:34:50 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@413 -- # waitforlisten 85895 00:36:19.935 18:34:50 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@835 -- # '[' -z 85895 ']' 00:36:19.935 18:34:50 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:19.935 18:34:50 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:19.935 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:19.935 18:34:50 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:19.935 18:34:50 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:19.935 18:34:50 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:36:19.935 [2024-12-06 18:34:50.819810] Starting SPDK v25.01-pre git sha1 b6a18b192 / DPDK 24.03.0 initialization... 
00:36:19.935 [2024-12-06 18:34:50.820457] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85895 ] 00:36:20.193 [2024-12-06 18:34:51.006283] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:20.193 [2024-12-06 18:34:51.108534] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:36:20.452 [2024-12-06 18:34:51.309357] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:36:20.452 [2024-12-06 18:34:51.309409] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:36:20.710 18:34:51 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:20.710 18:34:51 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@868 -- # return 0 00:36:20.710 18:34:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:36:20.710 18:34:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:36:20.710 18:34:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:36:20.710 18:34:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:36:20.710 18:34:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:36:20.710 18:34:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:36:20.710 18:34:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:36:20.710 18:34:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:36:20.711 18:34:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 
4096 -b malloc1 00:36:20.711 18:34:51 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:20.711 18:34:51 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:36:20.970 malloc1 00:36:20.970 18:34:51 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:20.970 18:34:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:36:20.970 18:34:51 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:20.970 18:34:51 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:36:20.970 [2024-12-06 18:34:51.683222] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:36:20.970 [2024-12-06 18:34:51.683292] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:36:20.970 [2024-12-06 18:34:51.683315] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:36:20.970 [2024-12-06 18:34:51.683327] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:36:20.970 [2024-12-06 18:34:51.685615] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:36:20.970 [2024-12-06 18:34:51.685658] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:36:20.970 pt1 00:36:20.970 18:34:51 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:20.970 18:34:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:36:20.970 18:34:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:36:20.970 18:34:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:36:20.970 18:34:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@418 -- # local 
bdev_pt=pt2 00:36:20.970 18:34:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:36:20.970 18:34:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:36:20.970 18:34:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:36:20.970 18:34:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:36:20.970 18:34:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc2 00:36:20.970 18:34:51 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:20.970 18:34:51 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:36:20.970 malloc2 00:36:20.970 18:34:51 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:20.971 18:34:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:36:20.971 18:34:51 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:20.971 18:34:51 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:36:20.971 [2024-12-06 18:34:51.737982] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:36:20.971 [2024-12-06 18:34:51.738036] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:36:20.971 [2024-12-06 18:34:51.738063] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:36:20.971 [2024-12-06 18:34:51.738074] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:36:20.971 [2024-12-06 18:34:51.740329] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:36:20.971 [2024-12-06 
18:34:51.740369] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:36:20.971 pt2 00:36:20.971 18:34:51 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:20.971 18:34:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:36:20.971 18:34:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:36:20.971 18:34:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:36:20.971 18:34:51 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:20.971 18:34:51 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:36:20.971 [2024-12-06 18:34:51.750023] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:36:20.971 [2024-12-06 18:34:51.751937] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:36:20.971 [2024-12-06 18:34:51.752098] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:36:20.971 [2024-12-06 18:34:51.752116] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:36:20.971 [2024-12-06 18:34:51.752362] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:36:20.971 [2024-12-06 18:34:51.752503] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:36:20.971 [2024-12-06 18:34:51.752519] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:36:20.971 [2024-12-06 18:34:51.752647] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:36:20.971 18:34:51 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:20.971 18:34:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@431 
-- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:36:20.971 18:34:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:36:20.971 18:34:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:36:20.971 18:34:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:36:20.971 18:34:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:36:20.971 18:34:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:36:20.971 18:34:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:36:20.971 18:34:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:36:20.971 18:34:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:36:20.971 18:34:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:36:20.971 18:34:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:20.971 18:34:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:20.971 18:34:51 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:20.971 18:34:51 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:36:20.971 18:34:51 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:20.971 18:34:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:36:20.971 "name": "raid_bdev1", 00:36:20.971 "uuid": "3da62151-9e21-4eeb-bd29-f90010341959", 00:36:20.971 "strip_size_kb": 0, 00:36:20.971 "state": "online", 00:36:20.971 "raid_level": "raid1", 00:36:20.971 "superblock": true, 00:36:20.971 "num_base_bdevs": 2, 00:36:20.971 
"num_base_bdevs_discovered": 2, 00:36:20.971 "num_base_bdevs_operational": 2, 00:36:20.971 "base_bdevs_list": [ 00:36:20.971 { 00:36:20.971 "name": "pt1", 00:36:20.971 "uuid": "00000000-0000-0000-0000-000000000001", 00:36:20.971 "is_configured": true, 00:36:20.971 "data_offset": 256, 00:36:20.971 "data_size": 7936 00:36:20.971 }, 00:36:20.971 { 00:36:20.971 "name": "pt2", 00:36:20.971 "uuid": "00000000-0000-0000-0000-000000000002", 00:36:20.971 "is_configured": true, 00:36:20.971 "data_offset": 256, 00:36:20.971 "data_size": 7936 00:36:20.971 } 00:36:20.971 ] 00:36:20.971 }' 00:36:20.971 18:34:51 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:36:20.971 18:34:51 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:36:21.230 18:34:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:36:21.230 18:34:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:36:21.230 18:34:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:36:21.230 18:34:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:36:21.230 18:34:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@184 -- # local name 00:36:21.230 18:34:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:36:21.230 18:34:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:36:21.230 18:34:52 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:21.230 18:34:52 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:36:21.230 18:34:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:36:21.230 [2024-12-06 18:34:52.169596] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: 
raid_bdev_dump_config_json 00:36:21.489 18:34:52 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:21.489 18:34:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:36:21.489 "name": "raid_bdev1", 00:36:21.489 "aliases": [ 00:36:21.489 "3da62151-9e21-4eeb-bd29-f90010341959" 00:36:21.489 ], 00:36:21.489 "product_name": "Raid Volume", 00:36:21.489 "block_size": 4096, 00:36:21.489 "num_blocks": 7936, 00:36:21.489 "uuid": "3da62151-9e21-4eeb-bd29-f90010341959", 00:36:21.489 "assigned_rate_limits": { 00:36:21.489 "rw_ios_per_sec": 0, 00:36:21.489 "rw_mbytes_per_sec": 0, 00:36:21.489 "r_mbytes_per_sec": 0, 00:36:21.489 "w_mbytes_per_sec": 0 00:36:21.489 }, 00:36:21.489 "claimed": false, 00:36:21.489 "zoned": false, 00:36:21.489 "supported_io_types": { 00:36:21.489 "read": true, 00:36:21.489 "write": true, 00:36:21.489 "unmap": false, 00:36:21.489 "flush": false, 00:36:21.489 "reset": true, 00:36:21.489 "nvme_admin": false, 00:36:21.489 "nvme_io": false, 00:36:21.489 "nvme_io_md": false, 00:36:21.489 "write_zeroes": true, 00:36:21.489 "zcopy": false, 00:36:21.489 "get_zone_info": false, 00:36:21.489 "zone_management": false, 00:36:21.489 "zone_append": false, 00:36:21.489 "compare": false, 00:36:21.489 "compare_and_write": false, 00:36:21.489 "abort": false, 00:36:21.489 "seek_hole": false, 00:36:21.489 "seek_data": false, 00:36:21.489 "copy": false, 00:36:21.489 "nvme_iov_md": false 00:36:21.489 }, 00:36:21.489 "memory_domains": [ 00:36:21.489 { 00:36:21.489 "dma_device_id": "system", 00:36:21.489 "dma_device_type": 1 00:36:21.489 }, 00:36:21.489 { 00:36:21.489 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:36:21.489 "dma_device_type": 2 00:36:21.489 }, 00:36:21.489 { 00:36:21.489 "dma_device_id": "system", 00:36:21.489 "dma_device_type": 1 00:36:21.489 }, 00:36:21.489 { 00:36:21.489 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:36:21.489 "dma_device_type": 2 00:36:21.489 } 00:36:21.489 ], 
00:36:21.489 "driver_specific": { 00:36:21.489 "raid": { 00:36:21.489 "uuid": "3da62151-9e21-4eeb-bd29-f90010341959", 00:36:21.489 "strip_size_kb": 0, 00:36:21.489 "state": "online", 00:36:21.489 "raid_level": "raid1", 00:36:21.489 "superblock": true, 00:36:21.489 "num_base_bdevs": 2, 00:36:21.489 "num_base_bdevs_discovered": 2, 00:36:21.489 "num_base_bdevs_operational": 2, 00:36:21.489 "base_bdevs_list": [ 00:36:21.489 { 00:36:21.489 "name": "pt1", 00:36:21.489 "uuid": "00000000-0000-0000-0000-000000000001", 00:36:21.489 "is_configured": true, 00:36:21.489 "data_offset": 256, 00:36:21.489 "data_size": 7936 00:36:21.489 }, 00:36:21.489 { 00:36:21.489 "name": "pt2", 00:36:21.489 "uuid": "00000000-0000-0000-0000-000000000002", 00:36:21.489 "is_configured": true, 00:36:21.489 "data_offset": 256, 00:36:21.489 "data_size": 7936 00:36:21.489 } 00:36:21.489 ] 00:36:21.489 } 00:36:21.489 } 00:36:21.489 }' 00:36:21.489 18:34:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:36:21.489 18:34:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:36:21.489 pt2' 00:36:21.489 18:34:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:36:21.489 18:34:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 ' 00:36:21.489 18:34:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:36:21.489 18:34:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:36:21.489 18:34:52 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:21.489 18:34:52 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:36:21.489 18:34:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # 
jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:36:21.489 18:34:52 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:21.489 18:34:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:36:21.489 18:34:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:36:21.489 18:34:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:36:21.489 18:34:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:36:21.489 18:34:52 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:21.490 18:34:52 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:36:21.490 18:34:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:36:21.490 18:34:52 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:21.490 18:34:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:36:21.490 18:34:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:36:21.490 18:34:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:36:21.490 18:34:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:36:21.490 18:34:52 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:21.490 18:34:52 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:36:21.490 [2024-12-06 18:34:52.385355] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:36:21.490 18:34:52 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:36:21.490 18:34:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=3da62151-9e21-4eeb-bd29-f90010341959 00:36:21.490 18:34:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@436 -- # '[' -z 3da62151-9e21-4eeb-bd29-f90010341959 ']' 00:36:21.490 18:34:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:36:21.490 18:34:52 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:21.490 18:34:52 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:36:21.490 [2024-12-06 18:34:52.425034] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:36:21.490 [2024-12-06 18:34:52.425060] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:36:21.490 [2024-12-06 18:34:52.425121] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:36:21.490 [2024-12-06 18:34:52.425180] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:36:21.490 [2024-12-06 18:34:52.425194] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:36:21.490 18:34:52 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:21.490 18:34:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:36:21.490 18:34:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:21.490 18:34:52 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:21.490 18:34:52 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:36:21.748 18:34:52 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:21.748 18:34:52 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@442 -- # raid_bdev= 00:36:21.748 18:34:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:36:21.748 18:34:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:36:21.748 18:34:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:36:21.748 18:34:52 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:21.748 18:34:52 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:36:21.748 18:34:52 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:21.748 18:34:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:36:21.748 18:34:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:36:21.748 18:34:52 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:21.748 18:34:52 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:36:21.748 18:34:52 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:21.748 18:34:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:36:21.748 18:34:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:36:21.748 18:34:52 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:21.748 18:34:52 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:36:21.748 18:34:52 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:21.748 18:34:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:36:21.748 18:34:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@457 
-- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:36:21.748 18:34:52 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@652 -- # local es=0 00:36:21.748 18:34:52 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:36:21.748 18:34:52 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:36:21.748 18:34:52 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:36:21.748 18:34:52 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:36:21.748 18:34:52 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:36:21.748 18:34:52 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:36:21.748 18:34:52 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:21.748 18:34:52 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:36:21.748 [2024-12-06 18:34:52.544885] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:36:21.748 [2024-12-06 18:34:52.546875] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:36:21.748 [2024-12-06 18:34:52.546936] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:36:21.748 [2024-12-06 18:34:52.546984] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:36:21.748 [2024-12-06 18:34:52.547000] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:36:21.748 [2024-12-06 18:34:52.547011] bdev_raid.c: 380:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:36:21.748 request: 00:36:21.748 { 00:36:21.748 "name": "raid_bdev1", 00:36:21.748 "raid_level": "raid1", 00:36:21.748 "base_bdevs": [ 00:36:21.748 "malloc1", 00:36:21.748 "malloc2" 00:36:21.748 ], 00:36:21.748 "superblock": false, 00:36:21.748 "method": "bdev_raid_create", 00:36:21.748 "req_id": 1 00:36:21.748 } 00:36:21.748 Got JSON-RPC error response 00:36:21.748 response: 00:36:21.748 { 00:36:21.748 "code": -17, 00:36:21.748 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:36:21.748 } 00:36:21.748 18:34:52 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:36:21.748 18:34:52 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@655 -- # es=1 00:36:21.748 18:34:52 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:36:21.748 18:34:52 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:36:21.748 18:34:52 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:36:21.748 18:34:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:21.748 18:34:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:36:21.748 18:34:52 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:21.748 18:34:52 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:36:21.748 18:34:52 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:21.748 18:34:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:36:21.748 18:34:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:36:21.749 18:34:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 
00000000-0000-0000-0000-000000000001 00:36:21.749 18:34:52 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:21.749 18:34:52 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:36:21.749 [2024-12-06 18:34:52.604793] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:36:21.749 [2024-12-06 18:34:52.604845] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:36:21.749 [2024-12-06 18:34:52.604880] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:36:21.749 [2024-12-06 18:34:52.604894] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:36:21.749 [2024-12-06 18:34:52.607274] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:36:21.749 [2024-12-06 18:34:52.607318] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:36:21.749 [2024-12-06 18:34:52.607382] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:36:21.749 [2024-12-06 18:34:52.607433] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:36:21.749 pt1 00:36:21.749 18:34:52 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:21.749 18:34:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:36:21.749 18:34:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:36:21.749 18:34:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:36:21.749 18:34:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:36:21.749 18:34:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:36:21.749 18:34:52 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:36:21.749 18:34:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:36:21.749 18:34:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:36:21.749 18:34:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:36:21.749 18:34:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:36:21.749 18:34:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:21.749 18:34:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:21.749 18:34:52 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:21.749 18:34:52 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:36:21.749 18:34:52 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:21.749 18:34:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:36:21.749 "name": "raid_bdev1", 00:36:21.749 "uuid": "3da62151-9e21-4eeb-bd29-f90010341959", 00:36:21.749 "strip_size_kb": 0, 00:36:21.749 "state": "configuring", 00:36:21.749 "raid_level": "raid1", 00:36:21.749 "superblock": true, 00:36:21.749 "num_base_bdevs": 2, 00:36:21.749 "num_base_bdevs_discovered": 1, 00:36:21.749 "num_base_bdevs_operational": 2, 00:36:21.749 "base_bdevs_list": [ 00:36:21.749 { 00:36:21.749 "name": "pt1", 00:36:21.749 "uuid": "00000000-0000-0000-0000-000000000001", 00:36:21.749 "is_configured": true, 00:36:21.749 "data_offset": 256, 00:36:21.749 "data_size": 7936 00:36:21.749 }, 00:36:21.749 { 00:36:21.749 "name": null, 00:36:21.749 "uuid": "00000000-0000-0000-0000-000000000002", 00:36:21.749 "is_configured": false, 00:36:21.749 "data_offset": 256, 00:36:21.749 "data_size": 7936 00:36:21.749 } 
00:36:21.749 ] 00:36:21.749 }' 00:36:21.749 18:34:52 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:36:21.749 18:34:52 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:36:22.316 18:34:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:36:22.316 18:34:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:36:22.316 18:34:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:36:22.316 18:34:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:36:22.316 18:34:53 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:22.316 18:34:53 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:36:22.316 [2024-12-06 18:34:53.016223] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:36:22.316 [2024-12-06 18:34:53.016283] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:36:22.316 [2024-12-06 18:34:53.016301] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:36:22.316 [2024-12-06 18:34:53.016314] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:36:22.316 [2024-12-06 18:34:53.016679] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:36:22.316 [2024-12-06 18:34:53.016711] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:36:22.316 [2024-12-06 18:34:53.016773] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:36:22.316 [2024-12-06 18:34:53.016798] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:36:22.316 [2024-12-06 18:34:53.016893] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device 
register 0x617000007e80 00:36:22.316 [2024-12-06 18:34:53.016905] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:36:22.316 [2024-12-06 18:34:53.017153] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:36:22.316 [2024-12-06 18:34:53.017313] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:36:22.316 [2024-12-06 18:34:53.017324] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:36:22.316 [2024-12-06 18:34:53.017465] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:36:22.316 pt2 00:36:22.316 18:34:53 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:22.316 18:34:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:36:22.316 18:34:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:36:22.316 18:34:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:36:22.316 18:34:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:36:22.316 18:34:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:36:22.316 18:34:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:36:22.316 18:34:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:36:22.316 18:34:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:36:22.316 18:34:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:36:22.316 18:34:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:36:22.316 18:34:53 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:36:22.316 18:34:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:36:22.316 18:34:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:22.316 18:34:53 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:22.316 18:34:53 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:36:22.316 18:34:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:22.316 18:34:53 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:22.316 18:34:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:36:22.316 "name": "raid_bdev1", 00:36:22.316 "uuid": "3da62151-9e21-4eeb-bd29-f90010341959", 00:36:22.316 "strip_size_kb": 0, 00:36:22.316 "state": "online", 00:36:22.316 "raid_level": "raid1", 00:36:22.316 "superblock": true, 00:36:22.316 "num_base_bdevs": 2, 00:36:22.316 "num_base_bdevs_discovered": 2, 00:36:22.317 "num_base_bdevs_operational": 2, 00:36:22.317 "base_bdevs_list": [ 00:36:22.317 { 00:36:22.317 "name": "pt1", 00:36:22.317 "uuid": "00000000-0000-0000-0000-000000000001", 00:36:22.317 "is_configured": true, 00:36:22.317 "data_offset": 256, 00:36:22.317 "data_size": 7936 00:36:22.317 }, 00:36:22.317 { 00:36:22.317 "name": "pt2", 00:36:22.317 "uuid": "00000000-0000-0000-0000-000000000002", 00:36:22.317 "is_configured": true, 00:36:22.317 "data_offset": 256, 00:36:22.317 "data_size": 7936 00:36:22.317 } 00:36:22.317 ] 00:36:22.317 }' 00:36:22.317 18:34:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:36:22.317 18:34:53 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:36:22.575 18:34:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties 
raid_bdev1 00:36:22.575 18:34:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:36:22.575 18:34:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:36:22.575 18:34:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:36:22.575 18:34:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@184 -- # local name 00:36:22.575 18:34:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:36:22.575 18:34:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:36:22.575 18:34:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:36:22.575 18:34:53 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:22.575 18:34:53 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:36:22.575 [2024-12-06 18:34:53.411839] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:36:22.575 18:34:53 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:22.575 18:34:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:36:22.575 "name": "raid_bdev1", 00:36:22.575 "aliases": [ 00:36:22.575 "3da62151-9e21-4eeb-bd29-f90010341959" 00:36:22.575 ], 00:36:22.575 "product_name": "Raid Volume", 00:36:22.575 "block_size": 4096, 00:36:22.575 "num_blocks": 7936, 00:36:22.575 "uuid": "3da62151-9e21-4eeb-bd29-f90010341959", 00:36:22.575 "assigned_rate_limits": { 00:36:22.575 "rw_ios_per_sec": 0, 00:36:22.575 "rw_mbytes_per_sec": 0, 00:36:22.575 "r_mbytes_per_sec": 0, 00:36:22.575 "w_mbytes_per_sec": 0 00:36:22.575 }, 00:36:22.575 "claimed": false, 00:36:22.575 "zoned": false, 00:36:22.576 "supported_io_types": { 00:36:22.576 "read": true, 00:36:22.576 "write": true, 00:36:22.576 "unmap": false, 
00:36:22.576 "flush": false, 00:36:22.576 "reset": true, 00:36:22.576 "nvme_admin": false, 00:36:22.576 "nvme_io": false, 00:36:22.576 "nvme_io_md": false, 00:36:22.576 "write_zeroes": true, 00:36:22.576 "zcopy": false, 00:36:22.576 "get_zone_info": false, 00:36:22.576 "zone_management": false, 00:36:22.576 "zone_append": false, 00:36:22.576 "compare": false, 00:36:22.576 "compare_and_write": false, 00:36:22.576 "abort": false, 00:36:22.576 "seek_hole": false, 00:36:22.576 "seek_data": false, 00:36:22.576 "copy": false, 00:36:22.576 "nvme_iov_md": false 00:36:22.576 }, 00:36:22.576 "memory_domains": [ 00:36:22.576 { 00:36:22.576 "dma_device_id": "system", 00:36:22.576 "dma_device_type": 1 00:36:22.576 }, 00:36:22.576 { 00:36:22.576 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:36:22.576 "dma_device_type": 2 00:36:22.576 }, 00:36:22.576 { 00:36:22.576 "dma_device_id": "system", 00:36:22.576 "dma_device_type": 1 00:36:22.576 }, 00:36:22.576 { 00:36:22.576 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:36:22.576 "dma_device_type": 2 00:36:22.576 } 00:36:22.576 ], 00:36:22.576 "driver_specific": { 00:36:22.576 "raid": { 00:36:22.576 "uuid": "3da62151-9e21-4eeb-bd29-f90010341959", 00:36:22.576 "strip_size_kb": 0, 00:36:22.576 "state": "online", 00:36:22.576 "raid_level": "raid1", 00:36:22.576 "superblock": true, 00:36:22.576 "num_base_bdevs": 2, 00:36:22.576 "num_base_bdevs_discovered": 2, 00:36:22.576 "num_base_bdevs_operational": 2, 00:36:22.576 "base_bdevs_list": [ 00:36:22.576 { 00:36:22.576 "name": "pt1", 00:36:22.576 "uuid": "00000000-0000-0000-0000-000000000001", 00:36:22.576 "is_configured": true, 00:36:22.576 "data_offset": 256, 00:36:22.576 "data_size": 7936 00:36:22.576 }, 00:36:22.576 { 00:36:22.576 "name": "pt2", 00:36:22.576 "uuid": "00000000-0000-0000-0000-000000000002", 00:36:22.576 "is_configured": true, 00:36:22.576 "data_offset": 256, 00:36:22.576 "data_size": 7936 00:36:22.576 } 00:36:22.576 ] 00:36:22.576 } 00:36:22.576 } 00:36:22.576 }' 00:36:22.576 
18:34:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:36:22.576 18:34:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:36:22.576 pt2' 00:36:22.576 18:34:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:36:22.835 18:34:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 ' 00:36:22.835 18:34:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:36:22.835 18:34:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:36:22.835 18:34:53 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:22.835 18:34:53 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:36:22.835 18:34:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:36:22.835 18:34:53 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:22.835 18:34:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:36:22.835 18:34:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:36:22.835 18:34:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:36:22.835 18:34:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:36:22.835 18:34:53 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:22.835 18:34:53 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:36:22.835 18:34:53 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:36:22.835 18:34:53 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:22.835 18:34:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:36:22.835 18:34:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:36:22.835 18:34:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:36:22.835 18:34:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:36:22.835 18:34:53 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:22.835 18:34:53 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:36:22.835 [2024-12-06 18:34:53.631511] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:36:22.835 18:34:53 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:22.835 18:34:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # '[' 3da62151-9e21-4eeb-bd29-f90010341959 '!=' 3da62151-9e21-4eeb-bd29-f90010341959 ']' 00:36:22.835 18:34:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:36:22.835 18:34:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@198 -- # case $1 in 00:36:22.835 18:34:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@199 -- # return 0 00:36:22.835 18:34:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:36:22.835 18:34:53 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:22.835 18:34:53 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:36:22.835 [2024-12-06 18:34:53.671291] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 
00:36:22.835 18:34:53 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:22.835 18:34:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:36:22.835 18:34:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:36:22.835 18:34:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:36:22.835 18:34:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:36:22.835 18:34:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:36:22.835 18:34:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:36:22.835 18:34:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:36:22.835 18:34:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:36:22.835 18:34:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:36:22.835 18:34:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:36:22.835 18:34:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:22.835 18:34:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:22.835 18:34:53 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:22.835 18:34:53 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:36:22.835 18:34:53 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:22.835 18:34:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:36:22.835 "name": "raid_bdev1", 00:36:22.835 "uuid": 
"3da62151-9e21-4eeb-bd29-f90010341959", 00:36:22.835 "strip_size_kb": 0, 00:36:22.835 "state": "online", 00:36:22.835 "raid_level": "raid1", 00:36:22.835 "superblock": true, 00:36:22.835 "num_base_bdevs": 2, 00:36:22.835 "num_base_bdevs_discovered": 1, 00:36:22.835 "num_base_bdevs_operational": 1, 00:36:22.835 "base_bdevs_list": [ 00:36:22.835 { 00:36:22.835 "name": null, 00:36:22.835 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:22.835 "is_configured": false, 00:36:22.835 "data_offset": 0, 00:36:22.835 "data_size": 7936 00:36:22.835 }, 00:36:22.835 { 00:36:22.835 "name": "pt2", 00:36:22.835 "uuid": "00000000-0000-0000-0000-000000000002", 00:36:22.835 "is_configured": true, 00:36:22.835 "data_offset": 256, 00:36:22.835 "data_size": 7936 00:36:22.835 } 00:36:22.835 ] 00:36:22.835 }' 00:36:22.835 18:34:53 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:36:22.835 18:34:53 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:36:23.403 18:34:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:36:23.403 18:34:54 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:23.403 18:34:54 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:36:23.403 [2024-12-06 18:34:54.106732] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:36:23.403 [2024-12-06 18:34:54.106760] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:36:23.403 [2024-12-06 18:34:54.106818] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:36:23.403 [2024-12-06 18:34:54.106855] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:36:23.403 [2024-12-06 18:34:54.106867] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state 
offline 00:36:23.403 18:34:54 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:23.403 18:34:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:36:23.403 18:34:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:23.403 18:34:54 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:23.403 18:34:54 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:36:23.403 18:34:54 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:23.403 18:34:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:36:23.403 18:34:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:36:23.403 18:34:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:36:23.403 18:34:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:36:23.403 18:34:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:36:23.403 18:34:54 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:23.403 18:34:54 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:36:23.403 18:34:54 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:23.403 18:34:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:36:23.403 18:34:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:36:23.403 18:34:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:36:23.403 18:34:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:36:23.403 18:34:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@519 -- # i=1 
00:36:23.403 18:34:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:36:23.403 18:34:54 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:23.403 18:34:54 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:36:23.403 [2024-12-06 18:34:54.162729] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:36:23.403 [2024-12-06 18:34:54.162782] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:36:23.403 [2024-12-06 18:34:54.162797] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:36:23.403 [2024-12-06 18:34:54.162810] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:36:23.403 [2024-12-06 18:34:54.165184] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:36:23.403 [2024-12-06 18:34:54.165224] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:36:23.403 [2024-12-06 18:34:54.165290] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:36:23.403 [2024-12-06 18:34:54.165334] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:36:23.403 [2024-12-06 18:34:54.165425] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:36:23.403 [2024-12-06 18:34:54.165440] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:36:23.403 [2024-12-06 18:34:54.165657] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:36:23.403 [2024-12-06 18:34:54.165816] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:36:23.403 [2024-12-06 18:34:54.165833] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, 
raid_bdev 0x617000008200 00:36:23.403 [2024-12-06 18:34:54.165956] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:36:23.403 pt2 00:36:23.404 18:34:54 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:23.404 18:34:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:36:23.404 18:34:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:36:23.404 18:34:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:36:23.404 18:34:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:36:23.404 18:34:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:36:23.404 18:34:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:36:23.404 18:34:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:36:23.404 18:34:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:36:23.404 18:34:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:36:23.404 18:34:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:36:23.404 18:34:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:23.404 18:34:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:23.404 18:34:54 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:23.404 18:34:54 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:36:23.404 18:34:54 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:23.404 18:34:54 
bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:36:23.404 "name": "raid_bdev1", 00:36:23.404 "uuid": "3da62151-9e21-4eeb-bd29-f90010341959", 00:36:23.404 "strip_size_kb": 0, 00:36:23.404 "state": "online", 00:36:23.404 "raid_level": "raid1", 00:36:23.404 "superblock": true, 00:36:23.404 "num_base_bdevs": 2, 00:36:23.404 "num_base_bdevs_discovered": 1, 00:36:23.404 "num_base_bdevs_operational": 1, 00:36:23.404 "base_bdevs_list": [ 00:36:23.404 { 00:36:23.404 "name": null, 00:36:23.404 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:23.404 "is_configured": false, 00:36:23.404 "data_offset": 256, 00:36:23.404 "data_size": 7936 00:36:23.404 }, 00:36:23.404 { 00:36:23.404 "name": "pt2", 00:36:23.404 "uuid": "00000000-0000-0000-0000-000000000002", 00:36:23.404 "is_configured": true, 00:36:23.404 "data_offset": 256, 00:36:23.404 "data_size": 7936 00:36:23.404 } 00:36:23.404 ] 00:36:23.404 }' 00:36:23.404 18:34:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:36:23.404 18:34:54 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:36:23.662 18:34:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:36:23.662 18:34:54 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:23.662 18:34:54 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:36:23.662 [2024-12-06 18:34:54.578699] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:36:23.662 [2024-12-06 18:34:54.578728] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:36:23.662 [2024-12-06 18:34:54.578772] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:36:23.662 [2024-12-06 18:34:54.578811] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 
00:36:23.662 [2024-12-06 18:34:54.578821] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:36:23.662 18:34:54 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:23.662 18:34:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:23.662 18:34:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:36:23.662 18:34:54 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:23.662 18:34:54 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:36:23.662 18:34:54 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:23.921 18:34:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:36:23.921 18:34:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:36:23.921 18:34:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:36:23.921 18:34:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:36:23.921 18:34:54 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:23.921 18:34:54 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:36:23.921 [2024-12-06 18:34:54.634737] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:36:23.921 [2024-12-06 18:34:54.634784] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:36:23.921 [2024-12-06 18:34:54.634801] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:36:23.921 [2024-12-06 18:34:54.634811] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:36:23.921 [2024-12-06 18:34:54.637133] 
vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:36:23.921 [2024-12-06 18:34:54.637180] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:36:23.921 [2024-12-06 18:34:54.637242] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:36:23.921 [2024-12-06 18:34:54.637278] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:36:23.921 [2024-12-06 18:34:54.637398] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:36:23.921 [2024-12-06 18:34:54.637410] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:36:23.921 [2024-12-06 18:34:54.637425] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:36:23.921 [2024-12-06 18:34:54.637472] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:36:23.921 [2024-12-06 18:34:54.637529] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:36:23.921 [2024-12-06 18:34:54.637538] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:36:23.921 [2024-12-06 18:34:54.637766] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:36:23.921 [2024-12-06 18:34:54.637893] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:36:23.921 [2024-12-06 18:34:54.637906] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:36:23.921 [2024-12-06 18:34:54.638026] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:36:23.921 pt1 00:36:23.921 18:34:54 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:23.921 18:34:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@542 -- 
# '[' 2 -gt 2 ']' 00:36:23.921 18:34:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:36:23.921 18:34:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:36:23.921 18:34:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:36:23.921 18:34:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:36:23.922 18:34:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:36:23.922 18:34:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:36:23.922 18:34:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:36:23.922 18:34:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:36:23.922 18:34:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:36:23.922 18:34:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:36:23.922 18:34:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:23.922 18:34:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:23.922 18:34:54 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:23.922 18:34:54 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:36:23.922 18:34:54 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:23.922 18:34:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:36:23.922 "name": "raid_bdev1", 00:36:23.922 "uuid": "3da62151-9e21-4eeb-bd29-f90010341959", 00:36:23.922 "strip_size_kb": 0, 00:36:23.922 "state": "online", 00:36:23.922 
"raid_level": "raid1", 00:36:23.922 "superblock": true, 00:36:23.922 "num_base_bdevs": 2, 00:36:23.922 "num_base_bdevs_discovered": 1, 00:36:23.922 "num_base_bdevs_operational": 1, 00:36:23.922 "base_bdevs_list": [ 00:36:23.922 { 00:36:23.922 "name": null, 00:36:23.922 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:23.922 "is_configured": false, 00:36:23.922 "data_offset": 256, 00:36:23.922 "data_size": 7936 00:36:23.922 }, 00:36:23.922 { 00:36:23.922 "name": "pt2", 00:36:23.922 "uuid": "00000000-0000-0000-0000-000000000002", 00:36:23.922 "is_configured": true, 00:36:23.922 "data_offset": 256, 00:36:23.922 "data_size": 7936 00:36:23.922 } 00:36:23.922 ] 00:36:23.922 }' 00:36:23.922 18:34:54 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:36:23.922 18:34:54 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:36:24.183 18:34:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:36:24.183 18:34:55 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:24.183 18:34:55 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:36:24.183 18:34:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:36:24.183 18:34:55 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:24.183 18:34:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:36:24.183 18:34:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:36:24.183 18:34:55 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:24.183 18:34:55 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:36:24.183 18:34:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | 
.uuid' 00:36:24.183 [2024-12-06 18:34:55.098938] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:36:24.183 18:34:55 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:24.442 18:34:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # '[' 3da62151-9e21-4eeb-bd29-f90010341959 '!=' 3da62151-9e21-4eeb-bd29-f90010341959 ']' 00:36:24.442 18:34:55 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@563 -- # killprocess 85895 00:36:24.442 18:34:55 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@954 -- # '[' -z 85895 ']' 00:36:24.442 18:34:55 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@958 -- # kill -0 85895 00:36:24.442 18:34:55 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@959 -- # uname 00:36:24.442 18:34:55 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:36:24.442 18:34:55 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85895 00:36:24.442 18:34:55 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:36:24.442 18:34:55 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:36:24.442 killing process with pid 85895 00:36:24.442 18:34:55 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85895' 00:36:24.442 18:34:55 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@973 -- # kill 85895 00:36:24.442 [2024-12-06 18:34:55.175922] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:36:24.442 [2024-12-06 18:34:55.175992] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:36:24.442 [2024-12-06 18:34:55.176031] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:36:24.442 [2024-12-06 
18:34:55.176048] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:36:24.442 18:34:55 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@978 -- # wait 85895 00:36:24.442 [2024-12-06 18:34:55.372612] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:36:25.838 18:34:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@565 -- # return 0 00:36:25.838 00:36:25.838 real 0m5.739s 00:36:25.838 user 0m8.538s 00:36:25.838 sys 0m1.276s 00:36:25.838 18:34:56 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:25.838 ************************************ 00:36:25.838 END TEST raid_superblock_test_4k 00:36:25.838 ************************************ 00:36:25.838 18:34:56 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:36:25.838 18:34:56 bdev_raid -- bdev/bdev_raid.sh@999 -- # '[' true = true ']' 00:36:25.838 18:34:56 bdev_raid -- bdev/bdev_raid.sh@1000 -- # run_test raid_rebuild_test_sb_4k raid_rebuild_test raid1 2 true false true 00:36:25.838 18:34:56 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:36:25.838 18:34:56 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:25.838 18:34:56 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:36:25.838 ************************************ 00:36:25.838 START TEST raid_rebuild_test_sb_4k 00:36:25.838 ************************************ 00:36:25.838 18:34:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 true false true 00:36:25.838 18:34:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:36:25.838 18:34:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:36:25.838 18:34:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:36:25.838 18:34:56 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:36:25.838 18:34:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@573 -- # local verify=true 00:36:25.838 18:34:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:36:25.838 18:34:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:36:25.838 18:34:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:36:25.838 18:34:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:36:25.838 18:34:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:36:25.838 18:34:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:36:25.838 18:34:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:36:25.838 18:34:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:36:25.838 18:34:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:36:25.838 18:34:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:36:25.838 18:34:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:36:25.838 18:34:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # local strip_size 00:36:25.838 18:34:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@577 -- # local create_arg 00:36:25.838 18:34:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:36:25.838 18:34:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@579 -- # local data_offset 00:36:25.838 18:34:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:36:25.838 18:34:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@589 -- # strip_size=0 
00:36:25.838 18:34:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:36:25.838 18:34:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:36:25.838 18:34:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@597 -- # raid_pid=86218 00:36:25.838 18:34:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@598 -- # waitforlisten 86218 00:36:25.838 18:34:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:36:25.838 18:34:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@835 -- # '[' -z 86218 ']' 00:36:25.838 18:34:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:25.838 18:34:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:25.838 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:25.838 18:34:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:25.838 18:34:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:25.838 18:34:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:36:25.838 I/O size of 3145728 is greater than zero copy threshold (65536). 00:36:25.838 Zero copy mechanism will not be used. 00:36:25.838 [2024-12-06 18:34:56.660264] Starting SPDK v25.01-pre git sha1 b6a18b192 / DPDK 24.03.0 initialization... 
00:36:25.838 [2024-12-06 18:34:56.660403] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86218 ] 00:36:26.098 [2024-12-06 18:34:56.848210] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:26.098 [2024-12-06 18:34:56.954930] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:36:26.357 [2024-12-06 18:34:57.161659] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:36:26.357 [2024-12-06 18:34:57.161864] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:36:26.617 18:34:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:26.617 18:34:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@868 -- # return 0 00:36:26.617 18:34:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:36:26.617 18:34:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev1_malloc 00:36:26.617 18:34:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:26.617 18:34:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:36:26.617 BaseBdev1_malloc 00:36:26.617 18:34:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:26.617 18:34:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:36:26.617 18:34:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:26.617 18:34:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:36:26.617 [2024-12-06 18:34:57.525677] vbdev_passthru.c: 
608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:36:26.617 [2024-12-06 18:34:57.525914] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:36:26.617 [2024-12-06 18:34:57.525971] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:36:26.617 [2024-12-06 18:34:57.526069] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:36:26.617 [2024-12-06 18:34:57.528364] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:36:26.617 [2024-12-06 18:34:57.528524] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:36:26.617 BaseBdev1 00:36:26.617 18:34:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:26.617 18:34:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:36:26.617 18:34:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev2_malloc 00:36:26.617 18:34:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:26.617 18:34:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:36:26.893 BaseBdev2_malloc 00:36:26.893 18:34:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:26.893 18:34:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:36:26.893 18:34:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:26.893 18:34:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:36:26.893 [2024-12-06 18:34:57.580433] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:36:26.893 [2024-12-06 18:34:57.580633] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base 
bdev opened 00:36:26.893 [2024-12-06 18:34:57.580694] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:36:26.893 [2024-12-06 18:34:57.580783] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:36:26.893 [2024-12-06 18:34:57.583204] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:36:26.893 [2024-12-06 18:34:57.583357] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:36:26.893 BaseBdev2 00:36:26.893 18:34:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:26.893 18:34:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 4096 -b spare_malloc 00:36:26.893 18:34:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:26.893 18:34:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:36:26.893 spare_malloc 00:36:26.893 18:34:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:26.893 18:34:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:36:26.893 18:34:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:26.893 18:34:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:36:26.893 spare_delay 00:36:26.893 18:34:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:26.893 18:34:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:36:26.893 18:34:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:26.893 18:34:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:36:26.893 
[2024-12-06 18:34:57.676584] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:36:26.893 [2024-12-06 18:34:57.676803] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:36:26.893 [2024-12-06 18:34:57.676832] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:36:26.893 [2024-12-06 18:34:57.676848] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:36:26.893 [2024-12-06 18:34:57.679311] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:36:26.893 [2024-12-06 18:34:57.679355] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:36:26.893 spare 00:36:26.893 18:34:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:26.893 18:34:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:36:26.893 18:34:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:26.893 18:34:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:36:26.893 [2024-12-06 18:34:57.688626] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:36:26.893 [2024-12-06 18:34:57.690771] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:36:26.893 [2024-12-06 18:34:57.690962] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:36:26.893 [2024-12-06 18:34:57.690979] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:36:26.893 [2024-12-06 18:34:57.691243] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:36:26.893 [2024-12-06 18:34:57.691420] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:36:26.893 [2024-12-06 
18:34:57.691431] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:36:26.893 [2024-12-06 18:34:57.691565] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:36:26.893 18:34:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:26.893 18:34:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:36:26.893 18:34:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:36:26.893 18:34:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:36:26.893 18:34:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:36:26.893 18:34:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:36:26.893 18:34:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:36:26.893 18:34:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:36:26.894 18:34:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:36:26.894 18:34:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:36:26.894 18:34:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:36:26.894 18:34:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:26.894 18:34:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:26.894 18:34:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:26.894 18:34:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:36:26.894 18:34:57 bdev_raid.raid_rebuild_test_sb_4k 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:26.894 18:34:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:36:26.894 "name": "raid_bdev1", 00:36:26.894 "uuid": "dc0b4280-8d0f-49e8-a22c-148a1453d426", 00:36:26.894 "strip_size_kb": 0, 00:36:26.894 "state": "online", 00:36:26.894 "raid_level": "raid1", 00:36:26.894 "superblock": true, 00:36:26.894 "num_base_bdevs": 2, 00:36:26.894 "num_base_bdevs_discovered": 2, 00:36:26.894 "num_base_bdevs_operational": 2, 00:36:26.894 "base_bdevs_list": [ 00:36:26.894 { 00:36:26.894 "name": "BaseBdev1", 00:36:26.894 "uuid": "be3d9f76-adc2-5c70-906d-c1fa27bdac08", 00:36:26.894 "is_configured": true, 00:36:26.894 "data_offset": 256, 00:36:26.894 "data_size": 7936 00:36:26.894 }, 00:36:26.894 { 00:36:26.894 "name": "BaseBdev2", 00:36:26.894 "uuid": "ff7486f0-2b70-569e-bf12-8dd706ce1c20", 00:36:26.894 "is_configured": true, 00:36:26.894 "data_offset": 256, 00:36:26.894 "data_size": 7936 00:36:26.894 } 00:36:26.894 ] 00:36:26.894 }' 00:36:26.894 18:34:57 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:36:26.894 18:34:57 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:36:27.461 18:34:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:36:27.461 18:34:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:36:27.461 18:34:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:27.461 18:34:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:36:27.461 [2024-12-06 18:34:58.124223] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:36:27.461 18:34:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:27.461 18:34:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # 
raid_bdev_size=7936 00:36:27.461 18:34:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:27.461 18:34:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:36:27.461 18:34:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:27.461 18:34:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:36:27.461 18:34:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:27.461 18:34:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@619 -- # data_offset=256 00:36:27.461 18:34:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:36:27.461 18:34:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:36:27.461 18:34:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:36:27.461 18:34:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:36:27.461 18:34:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:36:27.461 18:34:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:36:27.461 18:34:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # local bdev_list 00:36:27.461 18:34:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:36:27.461 18:34:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # local nbd_list 00:36:27.461 18:34:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@12 -- # local i 00:36:27.461 18:34:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:36:27.461 18:34:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:36:27.461 
18:34:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:36:27.461 [2024-12-06 18:34:58.399619] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:36:27.721 /dev/nbd0 00:36:27.721 18:34:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:36:27.721 18:34:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:36:27.721 18:34:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:36:27.721 18:34:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # local i 00:36:27.721 18:34:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:36:27.721 18:34:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:36:27.721 18:34:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:36:27.721 18:34:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@877 -- # break 00:36:27.721 18:34:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:36:27.721 18:34:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:36:27.721 18:34:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:36:27.721 1+0 records in 00:36:27.721 1+0 records out 00:36:27.721 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000382893 s, 10.7 MB/s 00:36:27.721 18:34:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:36:27.721 18:34:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # size=4096 00:36:27.721 18:34:58 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:36:27.721 18:34:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:36:27.721 18:34:58 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@893 -- # return 0 00:36:27.721 18:34:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:36:27.721 18:34:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:36:27.721 18:34:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:36:27.721 18:34:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:36:27.721 18:34:58 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=4096 count=7936 oflag=direct 00:36:28.290 7936+0 records in 00:36:28.290 7936+0 records out 00:36:28.290 32505856 bytes (33 MB, 31 MiB) copied, 0.722138 s, 45.0 MB/s 00:36:28.290 18:34:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:36:28.290 18:34:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:36:28.290 18:34:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:36:28.290 18:34:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # local nbd_list 00:36:28.290 18:34:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@51 -- # local i 00:36:28.290 18:34:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:36:28.290 18:34:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:36:28.549 18:34:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:36:28.549 
[2024-12-06 18:34:59.419615] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:36:28.549 18:34:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:36:28.549 18:34:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:36:28.549 18:34:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:36:28.549 18:34:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:36:28.549 18:34:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:36:28.549 18:34:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:36:28.549 18:34:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:36:28.549 18:34:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:36:28.549 18:34:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:28.549 18:34:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:36:28.549 [2024-12-06 18:34:59.431707] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:36:28.549 18:34:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:28.549 18:34:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:36:28.549 18:34:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:36:28.549 18:34:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:36:28.549 18:34:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:36:28.549 18:34:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:36:28.549 18:34:59 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:36:28.549 18:34:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:36:28.549 18:34:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:36:28.549 18:34:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:36:28.549 18:34:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:36:28.549 18:34:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:28.549 18:34:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:28.549 18:34:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:28.549 18:34:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:36:28.549 18:34:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:28.549 18:34:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:36:28.549 "name": "raid_bdev1", 00:36:28.549 "uuid": "dc0b4280-8d0f-49e8-a22c-148a1453d426", 00:36:28.549 "strip_size_kb": 0, 00:36:28.549 "state": "online", 00:36:28.549 "raid_level": "raid1", 00:36:28.549 "superblock": true, 00:36:28.549 "num_base_bdevs": 2, 00:36:28.549 "num_base_bdevs_discovered": 1, 00:36:28.549 "num_base_bdevs_operational": 1, 00:36:28.549 "base_bdevs_list": [ 00:36:28.549 { 00:36:28.549 "name": null, 00:36:28.549 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:28.549 "is_configured": false, 00:36:28.549 "data_offset": 0, 00:36:28.549 "data_size": 7936 00:36:28.549 }, 00:36:28.549 { 00:36:28.549 "name": "BaseBdev2", 00:36:28.549 "uuid": "ff7486f0-2b70-569e-bf12-8dd706ce1c20", 00:36:28.549 "is_configured": true, 00:36:28.549 "data_offset": 256, 00:36:28.549 
"data_size": 7936 00:36:28.549 } 00:36:28.549 ] 00:36:28.549 }' 00:36:28.549 18:34:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:36:28.549 18:34:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:36:29.115 18:34:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:36:29.115 18:34:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:29.115 18:34:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:36:29.115 [2024-12-06 18:34:59.835253] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:36:29.115 [2024-12-06 18:34:59.855409] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d260 00:36:29.115 18:34:59 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:29.115 18:34:59 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@647 -- # sleep 1 00:36:29.115 [2024-12-06 18:34:59.857814] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:36:30.049 18:35:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:36:30.049 18:35:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:36:30.049 18:35:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:36:30.049 18:35:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:36:30.049 18:35:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:36:30.049 18:35:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:30.049 18:35:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | 
select(.name == "raid_bdev1")' 00:36:30.049 18:35:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:30.049 18:35:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:36:30.049 18:35:00 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:30.049 18:35:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:36:30.049 "name": "raid_bdev1", 00:36:30.049 "uuid": "dc0b4280-8d0f-49e8-a22c-148a1453d426", 00:36:30.049 "strip_size_kb": 0, 00:36:30.049 "state": "online", 00:36:30.049 "raid_level": "raid1", 00:36:30.049 "superblock": true, 00:36:30.049 "num_base_bdevs": 2, 00:36:30.049 "num_base_bdevs_discovered": 2, 00:36:30.049 "num_base_bdevs_operational": 2, 00:36:30.049 "process": { 00:36:30.049 "type": "rebuild", 00:36:30.049 "target": "spare", 00:36:30.049 "progress": { 00:36:30.049 "blocks": 2560, 00:36:30.049 "percent": 32 00:36:30.049 } 00:36:30.049 }, 00:36:30.049 "base_bdevs_list": [ 00:36:30.049 { 00:36:30.049 "name": "spare", 00:36:30.049 "uuid": "d8b8ee10-3628-5c7f-b57f-7823f03b3629", 00:36:30.049 "is_configured": true, 00:36:30.049 "data_offset": 256, 00:36:30.049 "data_size": 7936 00:36:30.049 }, 00:36:30.049 { 00:36:30.049 "name": "BaseBdev2", 00:36:30.049 "uuid": "ff7486f0-2b70-569e-bf12-8dd706ce1c20", 00:36:30.049 "is_configured": true, 00:36:30.049 "data_offset": 256, 00:36:30.049 "data_size": 7936 00:36:30.049 } 00:36:30.049 ] 00:36:30.049 }' 00:36:30.049 18:35:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:36:30.049 18:35:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:36:30.049 18:35:00 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:36:30.314 18:35:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 
00:36:30.314 18:35:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:36:30.314 18:35:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:30.314 18:35:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:36:30.314 [2024-12-06 18:35:01.005398] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:36:30.314 [2024-12-06 18:35:01.067745] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:36:30.314 [2024-12-06 18:35:01.067821] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:36:30.314 [2024-12-06 18:35:01.067839] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:36:30.314 [2024-12-06 18:35:01.067853] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:36:30.314 18:35:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:30.314 18:35:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:36:30.314 18:35:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:36:30.314 18:35:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:36:30.314 18:35:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:36:30.314 18:35:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:36:30.314 18:35:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:36:30.314 18:35:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:36:30.314 18:35:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 
00:36:30.314 18:35:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:36:30.314 18:35:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:36:30.314 18:35:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:30.314 18:35:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:30.314 18:35:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:30.314 18:35:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:36:30.314 18:35:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:30.314 18:35:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:36:30.314 "name": "raid_bdev1", 00:36:30.314 "uuid": "dc0b4280-8d0f-49e8-a22c-148a1453d426", 00:36:30.314 "strip_size_kb": 0, 00:36:30.314 "state": "online", 00:36:30.314 "raid_level": "raid1", 00:36:30.314 "superblock": true, 00:36:30.314 "num_base_bdevs": 2, 00:36:30.314 "num_base_bdevs_discovered": 1, 00:36:30.314 "num_base_bdevs_operational": 1, 00:36:30.314 "base_bdevs_list": [ 00:36:30.314 { 00:36:30.314 "name": null, 00:36:30.314 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:30.314 "is_configured": false, 00:36:30.314 "data_offset": 0, 00:36:30.314 "data_size": 7936 00:36:30.314 }, 00:36:30.314 { 00:36:30.314 "name": "BaseBdev2", 00:36:30.314 "uuid": "ff7486f0-2b70-569e-bf12-8dd706ce1c20", 00:36:30.314 "is_configured": true, 00:36:30.314 "data_offset": 256, 00:36:30.314 "data_size": 7936 00:36:30.314 } 00:36:30.314 ] 00:36:30.314 }' 00:36:30.314 18:35:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:36:30.314 18:35:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:36:30.882 18:35:01 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:36:30.882 18:35:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:36:30.882 18:35:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:36:30.882 18:35:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:36:30.882 18:35:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:36:30.882 18:35:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:30.882 18:35:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:30.882 18:35:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:30.882 18:35:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:36:30.882 18:35:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:30.882 18:35:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:36:30.882 "name": "raid_bdev1", 00:36:30.882 "uuid": "dc0b4280-8d0f-49e8-a22c-148a1453d426", 00:36:30.882 "strip_size_kb": 0, 00:36:30.882 "state": "online", 00:36:30.882 "raid_level": "raid1", 00:36:30.882 "superblock": true, 00:36:30.882 "num_base_bdevs": 2, 00:36:30.882 "num_base_bdevs_discovered": 1, 00:36:30.882 "num_base_bdevs_operational": 1, 00:36:30.882 "base_bdevs_list": [ 00:36:30.882 { 00:36:30.882 "name": null, 00:36:30.882 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:30.882 "is_configured": false, 00:36:30.882 "data_offset": 0, 00:36:30.882 "data_size": 7936 00:36:30.882 }, 00:36:30.882 { 00:36:30.882 "name": "BaseBdev2", 00:36:30.882 "uuid": "ff7486f0-2b70-569e-bf12-8dd706ce1c20", 00:36:30.882 "is_configured": true, 00:36:30.882 "data_offset": 
256, 00:36:30.882 "data_size": 7936 00:36:30.882 } 00:36:30.882 ] 00:36:30.882 }' 00:36:30.882 18:35:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:36:30.882 18:35:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:36:30.882 18:35:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:36:30.882 18:35:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:36:30.882 18:35:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:36:30.882 18:35:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:30.882 18:35:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:36:30.882 [2024-12-06 18:35:01.693025] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:36:30.882 [2024-12-06 18:35:01.711196] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d330 00:36:30.882 18:35:01 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:30.882 18:35:01 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@663 -- # sleep 1 00:36:30.882 [2024-12-06 18:35:01.713664] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:36:31.816 18:35:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:36:31.816 18:35:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:36:31.816 18:35:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:36:31.816 18:35:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:36:31.816 18:35:02 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:36:31.816 18:35:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:31.816 18:35:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:31.816 18:35:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:31.816 18:35:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:36:31.816 18:35:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:32.075 18:35:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:36:32.075 "name": "raid_bdev1", 00:36:32.075 "uuid": "dc0b4280-8d0f-49e8-a22c-148a1453d426", 00:36:32.075 "strip_size_kb": 0, 00:36:32.075 "state": "online", 00:36:32.075 "raid_level": "raid1", 00:36:32.075 "superblock": true, 00:36:32.075 "num_base_bdevs": 2, 00:36:32.075 "num_base_bdevs_discovered": 2, 00:36:32.075 "num_base_bdevs_operational": 2, 00:36:32.075 "process": { 00:36:32.075 "type": "rebuild", 00:36:32.075 "target": "spare", 00:36:32.075 "progress": { 00:36:32.075 "blocks": 2560, 00:36:32.075 "percent": 32 00:36:32.075 } 00:36:32.075 }, 00:36:32.075 "base_bdevs_list": [ 00:36:32.075 { 00:36:32.075 "name": "spare", 00:36:32.075 "uuid": "d8b8ee10-3628-5c7f-b57f-7823f03b3629", 00:36:32.075 "is_configured": true, 00:36:32.075 "data_offset": 256, 00:36:32.075 "data_size": 7936 00:36:32.075 }, 00:36:32.075 { 00:36:32.075 "name": "BaseBdev2", 00:36:32.075 "uuid": "ff7486f0-2b70-569e-bf12-8dd706ce1c20", 00:36:32.075 "is_configured": true, 00:36:32.075 "data_offset": 256, 00:36:32.075 "data_size": 7936 00:36:32.075 } 00:36:32.075 ] 00:36:32.075 }' 00:36:32.075 18:35:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:36:32.075 18:35:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ 
rebuild == \r\e\b\u\i\l\d ]] 00:36:32.075 18:35:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:36:32.075 18:35:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:36:32.075 18:35:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:36:32.075 18:35:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:36:32.075 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:36:32.075 18:35:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:36:32.075 18:35:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:36:32.075 18:35:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:36:32.075 18:35:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@706 -- # local timeout=679 00:36:32.075 18:35:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:36:32.075 18:35:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:36:32.075 18:35:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:36:32.075 18:35:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:36:32.075 18:35:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:36:32.075 18:35:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:36:32.075 18:35:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:32.075 18:35:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:32.075 18:35:02 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@10 -- # set +x 00:36:32.075 18:35:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:32.075 18:35:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:32.075 18:35:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:36:32.075 "name": "raid_bdev1", 00:36:32.075 "uuid": "dc0b4280-8d0f-49e8-a22c-148a1453d426", 00:36:32.075 "strip_size_kb": 0, 00:36:32.075 "state": "online", 00:36:32.075 "raid_level": "raid1", 00:36:32.075 "superblock": true, 00:36:32.075 "num_base_bdevs": 2, 00:36:32.075 "num_base_bdevs_discovered": 2, 00:36:32.075 "num_base_bdevs_operational": 2, 00:36:32.075 "process": { 00:36:32.075 "type": "rebuild", 00:36:32.075 "target": "spare", 00:36:32.075 "progress": { 00:36:32.075 "blocks": 2816, 00:36:32.075 "percent": 35 00:36:32.075 } 00:36:32.075 }, 00:36:32.075 "base_bdevs_list": [ 00:36:32.075 { 00:36:32.075 "name": "spare", 00:36:32.075 "uuid": "d8b8ee10-3628-5c7f-b57f-7823f03b3629", 00:36:32.075 "is_configured": true, 00:36:32.075 "data_offset": 256, 00:36:32.075 "data_size": 7936 00:36:32.075 }, 00:36:32.075 { 00:36:32.075 "name": "BaseBdev2", 00:36:32.075 "uuid": "ff7486f0-2b70-569e-bf12-8dd706ce1c20", 00:36:32.075 "is_configured": true, 00:36:32.075 "data_offset": 256, 00:36:32.075 "data_size": 7936 00:36:32.075 } 00:36:32.075 ] 00:36:32.075 }' 00:36:32.075 18:35:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:36:32.075 18:35:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:36:32.075 18:35:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:36:32.075 18:35:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:36:32.075 18:35:02 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@711 -- # sleep 1 00:36:33.479 18:35:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:36:33.479 18:35:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:36:33.479 18:35:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:36:33.479 18:35:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:36:33.479 18:35:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:36:33.479 18:35:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:36:33.479 18:35:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:33.479 18:35:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:33.479 18:35:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:33.479 18:35:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:36:33.479 18:35:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:33.479 18:35:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:36:33.479 "name": "raid_bdev1", 00:36:33.479 "uuid": "dc0b4280-8d0f-49e8-a22c-148a1453d426", 00:36:33.479 "strip_size_kb": 0, 00:36:33.479 "state": "online", 00:36:33.479 "raid_level": "raid1", 00:36:33.479 "superblock": true, 00:36:33.479 "num_base_bdevs": 2, 00:36:33.479 "num_base_bdevs_discovered": 2, 00:36:33.479 "num_base_bdevs_operational": 2, 00:36:33.479 "process": { 00:36:33.479 "type": "rebuild", 00:36:33.479 "target": "spare", 00:36:33.479 "progress": { 00:36:33.479 "blocks": 5632, 00:36:33.479 "percent": 70 00:36:33.479 } 00:36:33.479 }, 00:36:33.479 "base_bdevs_list": [ 00:36:33.479 { 
00:36:33.479 "name": "spare", 00:36:33.479 "uuid": "d8b8ee10-3628-5c7f-b57f-7823f03b3629", 00:36:33.479 "is_configured": true, 00:36:33.479 "data_offset": 256, 00:36:33.479 "data_size": 7936 00:36:33.479 }, 00:36:33.479 { 00:36:33.479 "name": "BaseBdev2", 00:36:33.479 "uuid": "ff7486f0-2b70-569e-bf12-8dd706ce1c20", 00:36:33.479 "is_configured": true, 00:36:33.479 "data_offset": 256, 00:36:33.479 "data_size": 7936 00:36:33.479 } 00:36:33.479 ] 00:36:33.479 }' 00:36:33.479 18:35:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:36:33.479 18:35:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:36:33.479 18:35:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:36:33.479 18:35:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:36:33.479 18:35:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@711 -- # sleep 1 00:36:34.046 [2024-12-06 18:35:04.838025] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:36:34.046 [2024-12-06 18:35:04.838111] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:36:34.046 [2024-12-06 18:35:04.838265] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:36:34.305 18:35:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:36:34.305 18:35:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:36:34.305 18:35:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:36:34.305 18:35:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:36:34.305 18:35:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 
00:36:34.305 18:35:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:36:34.305 18:35:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:34.305 18:35:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:34.305 18:35:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:34.305 18:35:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:36:34.305 18:35:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:34.305 18:35:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:36:34.305 "name": "raid_bdev1", 00:36:34.305 "uuid": "dc0b4280-8d0f-49e8-a22c-148a1453d426", 00:36:34.305 "strip_size_kb": 0, 00:36:34.305 "state": "online", 00:36:34.305 "raid_level": "raid1", 00:36:34.305 "superblock": true, 00:36:34.305 "num_base_bdevs": 2, 00:36:34.305 "num_base_bdevs_discovered": 2, 00:36:34.305 "num_base_bdevs_operational": 2, 00:36:34.305 "base_bdevs_list": [ 00:36:34.305 { 00:36:34.305 "name": "spare", 00:36:34.305 "uuid": "d8b8ee10-3628-5c7f-b57f-7823f03b3629", 00:36:34.305 "is_configured": true, 00:36:34.305 "data_offset": 256, 00:36:34.305 "data_size": 7936 00:36:34.305 }, 00:36:34.305 { 00:36:34.305 "name": "BaseBdev2", 00:36:34.305 "uuid": "ff7486f0-2b70-569e-bf12-8dd706ce1c20", 00:36:34.305 "is_configured": true, 00:36:34.305 "data_offset": 256, 00:36:34.305 "data_size": 7936 00:36:34.305 } 00:36:34.305 ] 00:36:34.305 }' 00:36:34.305 18:35:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:36:34.305 18:35:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:36:34.305 18:35:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 
00:36:34.305 18:35:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:36:34.305 18:35:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@709 -- # break 00:36:34.305 18:35:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:36:34.305 18:35:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:36:34.305 18:35:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:36:34.305 18:35:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:36:34.305 18:35:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:36:34.305 18:35:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:34.305 18:35:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:34.305 18:35:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:34.305 18:35:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:36:34.562 18:35:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:34.562 18:35:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:36:34.562 "name": "raid_bdev1", 00:36:34.562 "uuid": "dc0b4280-8d0f-49e8-a22c-148a1453d426", 00:36:34.562 "strip_size_kb": 0, 00:36:34.562 "state": "online", 00:36:34.562 "raid_level": "raid1", 00:36:34.562 "superblock": true, 00:36:34.562 "num_base_bdevs": 2, 00:36:34.562 "num_base_bdevs_discovered": 2, 00:36:34.562 "num_base_bdevs_operational": 2, 00:36:34.562 "base_bdevs_list": [ 00:36:34.562 { 00:36:34.562 "name": "spare", 00:36:34.562 "uuid": "d8b8ee10-3628-5c7f-b57f-7823f03b3629", 00:36:34.562 "is_configured": true, 00:36:34.562 
"data_offset": 256, 00:36:34.562 "data_size": 7936 00:36:34.562 }, 00:36:34.562 { 00:36:34.563 "name": "BaseBdev2", 00:36:34.563 "uuid": "ff7486f0-2b70-569e-bf12-8dd706ce1c20", 00:36:34.563 "is_configured": true, 00:36:34.563 "data_offset": 256, 00:36:34.563 "data_size": 7936 00:36:34.563 } 00:36:34.563 ] 00:36:34.563 }' 00:36:34.563 18:35:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:36:34.563 18:35:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:36:34.563 18:35:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:36:34.563 18:35:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:36:34.563 18:35:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:36:34.563 18:35:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:36:34.563 18:35:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:36:34.563 18:35:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:36:34.563 18:35:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:36:34.563 18:35:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:36:34.563 18:35:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:36:34.563 18:35:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:36:34.563 18:35:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:36:34.563 18:35:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:36:34.563 18:35:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq 
-r '.[] | select(.name == "raid_bdev1")' 00:36:34.563 18:35:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:34.563 18:35:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:34.563 18:35:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:36:34.563 18:35:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:34.563 18:35:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:36:34.563 "name": "raid_bdev1", 00:36:34.563 "uuid": "dc0b4280-8d0f-49e8-a22c-148a1453d426", 00:36:34.563 "strip_size_kb": 0, 00:36:34.563 "state": "online", 00:36:34.563 "raid_level": "raid1", 00:36:34.563 "superblock": true, 00:36:34.563 "num_base_bdevs": 2, 00:36:34.563 "num_base_bdevs_discovered": 2, 00:36:34.563 "num_base_bdevs_operational": 2, 00:36:34.563 "base_bdevs_list": [ 00:36:34.563 { 00:36:34.563 "name": "spare", 00:36:34.563 "uuid": "d8b8ee10-3628-5c7f-b57f-7823f03b3629", 00:36:34.563 "is_configured": true, 00:36:34.563 "data_offset": 256, 00:36:34.563 "data_size": 7936 00:36:34.563 }, 00:36:34.563 { 00:36:34.563 "name": "BaseBdev2", 00:36:34.563 "uuid": "ff7486f0-2b70-569e-bf12-8dd706ce1c20", 00:36:34.563 "is_configured": true, 00:36:34.563 "data_offset": 256, 00:36:34.563 "data_size": 7936 00:36:34.563 } 00:36:34.563 ] 00:36:34.563 }' 00:36:34.563 18:35:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:36:34.563 18:35:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:36:34.821 18:35:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:36:34.821 18:35:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:34.821 18:35:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:36:34.821 
[2024-12-06 18:35:05.735562] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:36:34.821 [2024-12-06 18:35:05.735601] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:36:34.822 [2024-12-06 18:35:05.735713] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:36:34.822 [2024-12-06 18:35:05.735801] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:36:34.822 [2024-12-06 18:35:05.735817] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:36:34.822 18:35:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:34.822 18:35:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:34.822 18:35:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@720 -- # jq length 00:36:34.822 18:35:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:34.822 18:35:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:36:34.822 18:35:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:35.079 18:35:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:36:35.079 18:35:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:36:35.079 18:35:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:36:35.079 18:35:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:36:35.079 18:35:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:36:35.079 18:35:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # 
bdev_list=('BaseBdev1' 'spare') 00:36:35.079 18:35:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # local bdev_list 00:36:35.079 18:35:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:36:35.079 18:35:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # local nbd_list 00:36:35.079 18:35:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@12 -- # local i 00:36:35.079 18:35:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:36:35.079 18:35:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:36:35.079 18:35:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:36:35.079 /dev/nbd0 00:36:35.337 18:35:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:36:35.337 18:35:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:36:35.337 18:35:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:36:35.337 18:35:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # local i 00:36:35.337 18:35:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:36:35.337 18:35:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:36:35.337 18:35:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:36:35.337 18:35:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@877 -- # break 00:36:35.337 18:35:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:36:35.337 18:35:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:36:35.337 18:35:06 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:36:35.337 1+0 records in 00:36:35.337 1+0 records out 00:36:35.337 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00027644 s, 14.8 MB/s 00:36:35.337 18:35:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:36:35.337 18:35:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # size=4096 00:36:35.337 18:35:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:36:35.337 18:35:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:36:35.337 18:35:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@893 -- # return 0 00:36:35.337 18:35:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:36:35.337 18:35:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:36:35.337 18:35:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:36:35.337 /dev/nbd1 00:36:35.595 18:35:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:36:35.595 18:35:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:36:35.595 18:35:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:36:35.595 18:35:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # local i 00:36:35.595 18:35:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:36:35.595 18:35:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:36:35.595 18:35:06 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:36:35.595 18:35:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@877 -- # break 00:36:35.595 18:35:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:36:35.595 18:35:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:36:35.595 18:35:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:36:35.595 1+0 records in 00:36:35.595 1+0 records out 00:36:35.595 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000426561 s, 9.6 MB/s 00:36:35.595 18:35:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:36:35.595 18:35:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # size=4096 00:36:35.595 18:35:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:36:35.595 18:35:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:36:35.595 18:35:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@893 -- # return 0 00:36:35.595 18:35:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:36:35.595 18:35:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:36:35.595 18:35:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:36:35.595 18:35:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:36:35.595 18:35:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:36:35.595 18:35:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # 
nbd_list=('/dev/nbd0' '/dev/nbd1') 00:36:35.595 18:35:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # local nbd_list 00:36:35.595 18:35:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@51 -- # local i 00:36:35.595 18:35:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:36:35.595 18:35:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:36:35.853 18:35:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:36:35.853 18:35:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:36:35.853 18:35:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:36:35.853 18:35:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:36:35.853 18:35:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:36:35.853 18:35:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:36:35.853 18:35:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:36:35.853 18:35:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:36:35.853 18:35:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:36:35.853 18:35:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:36:36.112 18:35:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:36:36.112 18:35:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:36:36.112 18:35:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:36:36.112 18:35:06 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:36:36.112 18:35:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:36:36.112 18:35:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:36:36.112 18:35:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:36:36.112 18:35:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:36:36.112 18:35:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:36:36.112 18:35:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:36:36.112 18:35:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:36.112 18:35:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:36:36.112 18:35:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:36.112 18:35:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:36:36.112 18:35:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:36.112 18:35:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:36:36.112 [2024-12-06 18:35:06.996702] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:36:36.112 [2024-12-06 18:35:06.996767] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:36:36.112 [2024-12-06 18:35:06.996799] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:36:36.112 [2024-12-06 18:35:06.996811] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:36:36.112 [2024-12-06 18:35:06.999704] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:36:36.112 
[2024-12-06 18:35:06.999749] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:36:36.112 [2024-12-06 18:35:06.999854] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:36:36.112 [2024-12-06 18:35:06.999913] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:36:36.112 [2024-12-06 18:35:07.000082] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:36:36.112 spare 00:36:36.112 18:35:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:36.112 18:35:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:36:36.112 18:35:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:36.112 18:35:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:36:36.371 [2024-12-06 18:35:07.100036] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:36:36.372 [2024-12-06 18:35:07.100071] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:36:36.372 [2024-12-06 18:35:07.100369] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1b50 00:36:36.372 [2024-12-06 18:35:07.100581] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:36:36.372 [2024-12-06 18:35:07.100602] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:36:36.372 [2024-12-06 18:35:07.100793] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:36:36.372 18:35:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:36.372 18:35:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:36:36.372 18:35:07 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:36:36.372 18:35:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:36:36.372 18:35:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:36:36.372 18:35:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:36:36.372 18:35:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:36:36.372 18:35:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:36:36.372 18:35:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:36:36.372 18:35:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:36:36.372 18:35:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:36:36.372 18:35:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:36.372 18:35:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:36.372 18:35:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:36.372 18:35:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:36:36.372 18:35:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:36.372 18:35:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:36:36.372 "name": "raid_bdev1", 00:36:36.372 "uuid": "dc0b4280-8d0f-49e8-a22c-148a1453d426", 00:36:36.372 "strip_size_kb": 0, 00:36:36.372 "state": "online", 00:36:36.372 "raid_level": "raid1", 00:36:36.372 "superblock": true, 00:36:36.372 "num_base_bdevs": 2, 00:36:36.372 "num_base_bdevs_discovered": 2, 00:36:36.372 "num_base_bdevs_operational": 2, 
00:36:36.372 "base_bdevs_list": [ 00:36:36.372 { 00:36:36.372 "name": "spare", 00:36:36.372 "uuid": "d8b8ee10-3628-5c7f-b57f-7823f03b3629", 00:36:36.372 "is_configured": true, 00:36:36.372 "data_offset": 256, 00:36:36.372 "data_size": 7936 00:36:36.372 }, 00:36:36.372 { 00:36:36.372 "name": "BaseBdev2", 00:36:36.372 "uuid": "ff7486f0-2b70-569e-bf12-8dd706ce1c20", 00:36:36.372 "is_configured": true, 00:36:36.372 "data_offset": 256, 00:36:36.372 "data_size": 7936 00:36:36.372 } 00:36:36.372 ] 00:36:36.372 }' 00:36:36.372 18:35:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:36:36.372 18:35:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:36:36.633 18:35:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:36:36.633 18:35:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:36:36.633 18:35:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:36:36.633 18:35:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:36:36.633 18:35:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:36:36.633 18:35:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:36.633 18:35:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:36.633 18:35:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:36.633 18:35:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:36:36.633 18:35:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:36.633 18:35:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:36:36.633 "name": "raid_bdev1", 00:36:36.633 
"uuid": "dc0b4280-8d0f-49e8-a22c-148a1453d426", 00:36:36.633 "strip_size_kb": 0, 00:36:36.633 "state": "online", 00:36:36.633 "raid_level": "raid1", 00:36:36.633 "superblock": true, 00:36:36.633 "num_base_bdevs": 2, 00:36:36.633 "num_base_bdevs_discovered": 2, 00:36:36.633 "num_base_bdevs_operational": 2, 00:36:36.633 "base_bdevs_list": [ 00:36:36.633 { 00:36:36.633 "name": "spare", 00:36:36.633 "uuid": "d8b8ee10-3628-5c7f-b57f-7823f03b3629", 00:36:36.633 "is_configured": true, 00:36:36.633 "data_offset": 256, 00:36:36.633 "data_size": 7936 00:36:36.633 }, 00:36:36.633 { 00:36:36.633 "name": "BaseBdev2", 00:36:36.633 "uuid": "ff7486f0-2b70-569e-bf12-8dd706ce1c20", 00:36:36.633 "is_configured": true, 00:36:36.633 "data_offset": 256, 00:36:36.633 "data_size": 7936 00:36:36.633 } 00:36:36.633 ] 00:36:36.633 }' 00:36:36.633 18:35:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:36:36.926 18:35:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:36:36.926 18:35:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:36:36.926 18:35:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:36:36.926 18:35:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:36.926 18:35:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:36:36.926 18:35:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:36.926 18:35:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:36:36.926 18:35:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:36.926 18:35:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:36:36.926 18:35:07 bdev_raid.raid_rebuild_test_sb_4k 
-- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:36:36.926 18:35:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:36.926 18:35:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:36:36.926 [2024-12-06 18:35:07.699916] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:36:36.926 18:35:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:36.926 18:35:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:36:36.926 18:35:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:36:36.926 18:35:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:36:36.926 18:35:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:36:36.926 18:35:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:36:36.926 18:35:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:36:36.926 18:35:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:36:36.926 18:35:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:36:36.926 18:35:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:36:36.926 18:35:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:36:36.926 18:35:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:36.926 18:35:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:36.926 18:35:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:36:36.926 18:35:07 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:36.926 18:35:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:36.926 18:35:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:36:36.926 "name": "raid_bdev1", 00:36:36.926 "uuid": "dc0b4280-8d0f-49e8-a22c-148a1453d426", 00:36:36.926 "strip_size_kb": 0, 00:36:36.926 "state": "online", 00:36:36.926 "raid_level": "raid1", 00:36:36.926 "superblock": true, 00:36:36.926 "num_base_bdevs": 2, 00:36:36.926 "num_base_bdevs_discovered": 1, 00:36:36.926 "num_base_bdevs_operational": 1, 00:36:36.926 "base_bdevs_list": [ 00:36:36.926 { 00:36:36.926 "name": null, 00:36:36.926 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:36.926 "is_configured": false, 00:36:36.926 "data_offset": 0, 00:36:36.926 "data_size": 7936 00:36:36.926 }, 00:36:36.926 { 00:36:36.926 "name": "BaseBdev2", 00:36:36.926 "uuid": "ff7486f0-2b70-569e-bf12-8dd706ce1c20", 00:36:36.926 "is_configured": true, 00:36:36.926 "data_offset": 256, 00:36:36.926 "data_size": 7936 00:36:36.926 } 00:36:36.926 ] 00:36:36.926 }' 00:36:36.926 18:35:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:36:36.926 18:35:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:36:37.186 18:35:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:36:37.186 18:35:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:37.186 18:35:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:36:37.446 [2024-12-06 18:35:08.135342] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:36:37.446 [2024-12-06 18:35:08.135544] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than 
existing raid bdev raid_bdev1 (5) 00:36:37.446 [2024-12-06 18:35:08.135570] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:36:37.446 [2024-12-06 18:35:08.135604] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:36:37.446 [2024-12-06 18:35:08.153488] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1c20 00:36:37.446 18:35:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:37.446 18:35:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@757 -- # sleep 1 00:36:37.446 [2024-12-06 18:35:08.156015] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:36:38.382 18:35:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:36:38.382 18:35:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:36:38.382 18:35:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:36:38.382 18:35:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:36:38.382 18:35:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:36:38.382 18:35:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:38.382 18:35:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:38.382 18:35:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:38.383 18:35:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:36:38.383 18:35:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:38.383 18:35:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # 
raid_bdev_info='{ 00:36:38.383 "name": "raid_bdev1", 00:36:38.383 "uuid": "dc0b4280-8d0f-49e8-a22c-148a1453d426", 00:36:38.383 "strip_size_kb": 0, 00:36:38.383 "state": "online", 00:36:38.383 "raid_level": "raid1", 00:36:38.383 "superblock": true, 00:36:38.383 "num_base_bdevs": 2, 00:36:38.383 "num_base_bdevs_discovered": 2, 00:36:38.383 "num_base_bdevs_operational": 2, 00:36:38.383 "process": { 00:36:38.383 "type": "rebuild", 00:36:38.383 "target": "spare", 00:36:38.383 "progress": { 00:36:38.383 "blocks": 2560, 00:36:38.383 "percent": 32 00:36:38.383 } 00:36:38.383 }, 00:36:38.383 "base_bdevs_list": [ 00:36:38.383 { 00:36:38.383 "name": "spare", 00:36:38.383 "uuid": "d8b8ee10-3628-5c7f-b57f-7823f03b3629", 00:36:38.383 "is_configured": true, 00:36:38.383 "data_offset": 256, 00:36:38.383 "data_size": 7936 00:36:38.383 }, 00:36:38.383 { 00:36:38.383 "name": "BaseBdev2", 00:36:38.383 "uuid": "ff7486f0-2b70-569e-bf12-8dd706ce1c20", 00:36:38.383 "is_configured": true, 00:36:38.383 "data_offset": 256, 00:36:38.383 "data_size": 7936 00:36:38.383 } 00:36:38.383 ] 00:36:38.383 }' 00:36:38.383 18:35:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:36:38.383 18:35:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:36:38.383 18:35:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:36:38.383 18:35:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:36:38.383 18:35:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:36:38.383 18:35:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:38.383 18:35:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:36:38.383 [2024-12-06 18:35:09.299669] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 
00:36:38.642 [2024-12-06 18:35:09.365071] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:36:38.642 [2024-12-06 18:35:09.365137] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:36:38.642 [2024-12-06 18:35:09.365163] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:36:38.642 [2024-12-06 18:35:09.365176] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:36:38.642 18:35:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:38.642 18:35:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:36:38.642 18:35:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:36:38.642 18:35:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:36:38.642 18:35:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:36:38.642 18:35:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:36:38.642 18:35:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:36:38.642 18:35:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:36:38.642 18:35:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:36:38.642 18:35:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:36:38.642 18:35:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:36:38.642 18:35:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:38.642 18:35:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 
00:36:38.642 18:35:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:38.642 18:35:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:36:38.642 18:35:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:38.642 18:35:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:36:38.642 "name": "raid_bdev1", 00:36:38.642 "uuid": "dc0b4280-8d0f-49e8-a22c-148a1453d426", 00:36:38.642 "strip_size_kb": 0, 00:36:38.642 "state": "online", 00:36:38.642 "raid_level": "raid1", 00:36:38.642 "superblock": true, 00:36:38.642 "num_base_bdevs": 2, 00:36:38.642 "num_base_bdevs_discovered": 1, 00:36:38.642 "num_base_bdevs_operational": 1, 00:36:38.642 "base_bdevs_list": [ 00:36:38.642 { 00:36:38.642 "name": null, 00:36:38.642 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:38.642 "is_configured": false, 00:36:38.642 "data_offset": 0, 00:36:38.642 "data_size": 7936 00:36:38.642 }, 00:36:38.642 { 00:36:38.642 "name": "BaseBdev2", 00:36:38.642 "uuid": "ff7486f0-2b70-569e-bf12-8dd706ce1c20", 00:36:38.642 "is_configured": true, 00:36:38.642 "data_offset": 256, 00:36:38.642 "data_size": 7936 00:36:38.642 } 00:36:38.642 ] 00:36:38.642 }' 00:36:38.642 18:35:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:36:38.642 18:35:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:36:38.901 18:35:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:36:38.901 18:35:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:38.901 18:35:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:36:38.901 [2024-12-06 18:35:09.840294] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:36:38.901 [2024-12-06 
18:35:09.840378] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:36:38.901 [2024-12-06 18:35:09.840403] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:36:38.901 [2024-12-06 18:35:09.840419] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:36:38.901 [2024-12-06 18:35:09.840962] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:36:38.901 [2024-12-06 18:35:09.840998] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:36:38.901 [2024-12-06 18:35:09.841100] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:36:38.901 [2024-12-06 18:35:09.841118] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:36:38.901 [2024-12-06 18:35:09.841131] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:36:38.901 [2024-12-06 18:35:09.841177] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:36:39.160 [2024-12-06 18:35:09.857678] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1cf0 00:36:39.160 spare 00:36:39.160 18:35:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:39.160 18:35:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@764 -- # sleep 1 00:36:39.160 [2024-12-06 18:35:09.860200] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:36:40.095 18:35:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:36:40.095 18:35:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:36:40.095 18:35:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:36:40.095 18:35:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:36:40.095 18:35:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:36:40.095 18:35:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:40.095 18:35:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:40.095 18:35:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:40.095 18:35:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:36:40.095 18:35:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:40.095 18:35:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:36:40.095 "name": "raid_bdev1", 00:36:40.095 "uuid": "dc0b4280-8d0f-49e8-a22c-148a1453d426", 00:36:40.095 "strip_size_kb": 0, 00:36:40.095 
"state": "online", 00:36:40.095 "raid_level": "raid1", 00:36:40.095 "superblock": true, 00:36:40.095 "num_base_bdevs": 2, 00:36:40.095 "num_base_bdevs_discovered": 2, 00:36:40.095 "num_base_bdevs_operational": 2, 00:36:40.095 "process": { 00:36:40.095 "type": "rebuild", 00:36:40.095 "target": "spare", 00:36:40.095 "progress": { 00:36:40.095 "blocks": 2560, 00:36:40.095 "percent": 32 00:36:40.095 } 00:36:40.095 }, 00:36:40.095 "base_bdevs_list": [ 00:36:40.095 { 00:36:40.095 "name": "spare", 00:36:40.095 "uuid": "d8b8ee10-3628-5c7f-b57f-7823f03b3629", 00:36:40.095 "is_configured": true, 00:36:40.095 "data_offset": 256, 00:36:40.095 "data_size": 7936 00:36:40.095 }, 00:36:40.095 { 00:36:40.095 "name": "BaseBdev2", 00:36:40.095 "uuid": "ff7486f0-2b70-569e-bf12-8dd706ce1c20", 00:36:40.095 "is_configured": true, 00:36:40.095 "data_offset": 256, 00:36:40.095 "data_size": 7936 00:36:40.095 } 00:36:40.095 ] 00:36:40.095 }' 00:36:40.095 18:35:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:36:40.095 18:35:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:36:40.095 18:35:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:36:40.095 18:35:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:36:40.095 18:35:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:36:40.095 18:35:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:40.095 18:35:10 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:36:40.095 [2024-12-06 18:35:10.991934] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:36:40.353 [2024-12-06 18:35:11.069381] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 
00:36:40.353 [2024-12-06 18:35:11.069449] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:36:40.353 [2024-12-06 18:35:11.069471] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:36:40.353 [2024-12-06 18:35:11.069481] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:36:40.353 18:35:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:40.353 18:35:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:36:40.353 18:35:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:36:40.354 18:35:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:36:40.354 18:35:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:36:40.354 18:35:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:36:40.354 18:35:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:36:40.354 18:35:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:36:40.354 18:35:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:36:40.354 18:35:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:36:40.354 18:35:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:36:40.354 18:35:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:40.354 18:35:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:40.354 18:35:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:36:40.354 18:35:11 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:40.354 18:35:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:40.354 18:35:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:36:40.354 "name": "raid_bdev1", 00:36:40.354 "uuid": "dc0b4280-8d0f-49e8-a22c-148a1453d426", 00:36:40.354 "strip_size_kb": 0, 00:36:40.354 "state": "online", 00:36:40.354 "raid_level": "raid1", 00:36:40.354 "superblock": true, 00:36:40.354 "num_base_bdevs": 2, 00:36:40.354 "num_base_bdevs_discovered": 1, 00:36:40.354 "num_base_bdevs_operational": 1, 00:36:40.354 "base_bdevs_list": [ 00:36:40.354 { 00:36:40.354 "name": null, 00:36:40.354 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:40.354 "is_configured": false, 00:36:40.354 "data_offset": 0, 00:36:40.354 "data_size": 7936 00:36:40.354 }, 00:36:40.354 { 00:36:40.354 "name": "BaseBdev2", 00:36:40.354 "uuid": "ff7486f0-2b70-569e-bf12-8dd706ce1c20", 00:36:40.354 "is_configured": true, 00:36:40.354 "data_offset": 256, 00:36:40.354 "data_size": 7936 00:36:40.354 } 00:36:40.354 ] 00:36:40.354 }' 00:36:40.354 18:35:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:36:40.354 18:35:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:36:40.919 18:35:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:36:40.919 18:35:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:36:40.919 18:35:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:36:40.919 18:35:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:36:40.919 18:35:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:36:40.919 18:35:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 
-- # rpc_cmd bdev_raid_get_bdevs all 00:36:40.919 18:35:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:40.919 18:35:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:40.919 18:35:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:36:40.919 18:35:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:40.919 18:35:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:36:40.919 "name": "raid_bdev1", 00:36:40.919 "uuid": "dc0b4280-8d0f-49e8-a22c-148a1453d426", 00:36:40.919 "strip_size_kb": 0, 00:36:40.919 "state": "online", 00:36:40.919 "raid_level": "raid1", 00:36:40.919 "superblock": true, 00:36:40.919 "num_base_bdevs": 2, 00:36:40.919 "num_base_bdevs_discovered": 1, 00:36:40.919 "num_base_bdevs_operational": 1, 00:36:40.919 "base_bdevs_list": [ 00:36:40.919 { 00:36:40.919 "name": null, 00:36:40.919 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:40.919 "is_configured": false, 00:36:40.919 "data_offset": 0, 00:36:40.919 "data_size": 7936 00:36:40.919 }, 00:36:40.919 { 00:36:40.919 "name": "BaseBdev2", 00:36:40.919 "uuid": "ff7486f0-2b70-569e-bf12-8dd706ce1c20", 00:36:40.919 "is_configured": true, 00:36:40.919 "data_offset": 256, 00:36:40.919 "data_size": 7936 00:36:40.919 } 00:36:40.919 ] 00:36:40.919 }' 00:36:40.919 18:35:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:36:40.919 18:35:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:36:40.919 18:35:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:36:40.919 18:35:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:36:40.919 18:35:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@773 -- # rpc_cmd 
bdev_passthru_delete BaseBdev1 00:36:40.919 18:35:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:40.919 18:35:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:36:40.919 18:35:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:40.919 18:35:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:36:40.920 18:35:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:40.920 18:35:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:36:40.920 [2024-12-06 18:35:11.718750] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:36:40.920 [2024-12-06 18:35:11.718961] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:36:40.920 [2024-12-06 18:35:11.719006] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:36:40.920 [2024-12-06 18:35:11.719033] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:36:40.920 [2024-12-06 18:35:11.719602] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:36:40.920 [2024-12-06 18:35:11.719627] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:36:40.920 [2024-12-06 18:35:11.719724] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:36:40.920 [2024-12-06 18:35:11.719741] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:36:40.920 [2024-12-06 18:35:11.719758] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:36:40.920 [2024-12-06 18:35:11.719772] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed 
to examine bdev BaseBdev1: Invalid argument 00:36:40.920 BaseBdev1 00:36:40.920 18:35:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:40.920 18:35:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@775 -- # sleep 1 00:36:41.855 18:35:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:36:41.855 18:35:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:36:41.855 18:35:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:36:41.855 18:35:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:36:41.855 18:35:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:36:41.855 18:35:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:36:41.855 18:35:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:36:41.855 18:35:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:36:41.855 18:35:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:36:41.855 18:35:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:36:41.855 18:35:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:41.855 18:35:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:41.855 18:35:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:41.855 18:35:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:36:41.855 18:35:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:41.855 18:35:12 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:36:41.856 "name": "raid_bdev1", 00:36:41.856 "uuid": "dc0b4280-8d0f-49e8-a22c-148a1453d426", 00:36:41.856 "strip_size_kb": 0, 00:36:41.856 "state": "online", 00:36:41.856 "raid_level": "raid1", 00:36:41.856 "superblock": true, 00:36:41.856 "num_base_bdevs": 2, 00:36:41.856 "num_base_bdevs_discovered": 1, 00:36:41.856 "num_base_bdevs_operational": 1, 00:36:41.856 "base_bdevs_list": [ 00:36:41.856 { 00:36:41.856 "name": null, 00:36:41.856 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:41.856 "is_configured": false, 00:36:41.856 "data_offset": 0, 00:36:41.856 "data_size": 7936 00:36:41.856 }, 00:36:41.856 { 00:36:41.856 "name": "BaseBdev2", 00:36:41.856 "uuid": "ff7486f0-2b70-569e-bf12-8dd706ce1c20", 00:36:41.856 "is_configured": true, 00:36:41.856 "data_offset": 256, 00:36:41.856 "data_size": 7936 00:36:41.856 } 00:36:41.856 ] 00:36:41.856 }' 00:36:41.856 18:35:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:36:41.856 18:35:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:36:42.421 18:35:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:36:42.422 18:35:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:36:42.422 18:35:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:36:42.422 18:35:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:36:42.422 18:35:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:36:42.422 18:35:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:42.422 18:35:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:42.422 18:35:13 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:42.422 18:35:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:36:42.422 18:35:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:42.422 18:35:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:36:42.422 "name": "raid_bdev1", 00:36:42.422 "uuid": "dc0b4280-8d0f-49e8-a22c-148a1453d426", 00:36:42.422 "strip_size_kb": 0, 00:36:42.422 "state": "online", 00:36:42.422 "raid_level": "raid1", 00:36:42.422 "superblock": true, 00:36:42.422 "num_base_bdevs": 2, 00:36:42.422 "num_base_bdevs_discovered": 1, 00:36:42.422 "num_base_bdevs_operational": 1, 00:36:42.422 "base_bdevs_list": [ 00:36:42.422 { 00:36:42.422 "name": null, 00:36:42.422 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:42.422 "is_configured": false, 00:36:42.422 "data_offset": 0, 00:36:42.422 "data_size": 7936 00:36:42.422 }, 00:36:42.422 { 00:36:42.422 "name": "BaseBdev2", 00:36:42.422 "uuid": "ff7486f0-2b70-569e-bf12-8dd706ce1c20", 00:36:42.422 "is_configured": true, 00:36:42.422 "data_offset": 256, 00:36:42.422 "data_size": 7936 00:36:42.422 } 00:36:42.422 ] 00:36:42.422 }' 00:36:42.422 18:35:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:36:42.422 18:35:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:36:42.422 18:35:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:36:42.422 18:35:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:36:42.422 18:35:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:36:42.422 18:35:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@652 -- # local es=0 00:36:42.422 18:35:13 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:36:42.422 18:35:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:36:42.422 18:35:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:36:42.422 18:35:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:36:42.422 18:35:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:36:42.422 18:35:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:36:42.422 18:35:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:42.422 18:35:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:36:42.422 [2024-12-06 18:35:13.322771] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:36:42.422 [2024-12-06 18:35:13.323082] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:36:42.422 [2024-12-06 18:35:13.323230] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:36:42.422 request: 00:36:42.422 { 00:36:42.422 "base_bdev": "BaseBdev1", 00:36:42.422 "raid_bdev": "raid_bdev1", 00:36:42.422 "method": "bdev_raid_add_base_bdev", 00:36:42.422 "req_id": 1 00:36:42.422 } 00:36:42.422 Got JSON-RPC error response 00:36:42.422 response: 00:36:42.422 { 00:36:42.422 "code": -22, 00:36:42.422 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:36:42.422 } 00:36:42.422 18:35:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:36:42.422 18:35:13 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@655 -- # es=1 00:36:42.422 18:35:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:36:42.422 18:35:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:36:42.422 18:35:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:36:42.422 18:35:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@779 -- # sleep 1 00:36:43.795 18:35:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:36:43.795 18:35:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:36:43.796 18:35:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:36:43.796 18:35:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:36:43.796 18:35:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:36:43.796 18:35:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:36:43.796 18:35:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:36:43.796 18:35:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:36:43.796 18:35:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:36:43.796 18:35:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:36:43.796 18:35:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:43.796 18:35:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:43.796 18:35:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:43.796 18:35:14 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:36:43.796 18:35:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:43.796 18:35:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:36:43.796 "name": "raid_bdev1", 00:36:43.796 "uuid": "dc0b4280-8d0f-49e8-a22c-148a1453d426", 00:36:43.796 "strip_size_kb": 0, 00:36:43.796 "state": "online", 00:36:43.796 "raid_level": "raid1", 00:36:43.796 "superblock": true, 00:36:43.796 "num_base_bdevs": 2, 00:36:43.796 "num_base_bdevs_discovered": 1, 00:36:43.796 "num_base_bdevs_operational": 1, 00:36:43.796 "base_bdevs_list": [ 00:36:43.796 { 00:36:43.796 "name": null, 00:36:43.796 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:43.796 "is_configured": false, 00:36:43.796 "data_offset": 0, 00:36:43.796 "data_size": 7936 00:36:43.796 }, 00:36:43.796 { 00:36:43.796 "name": "BaseBdev2", 00:36:43.796 "uuid": "ff7486f0-2b70-569e-bf12-8dd706ce1c20", 00:36:43.796 "is_configured": true, 00:36:43.796 "data_offset": 256, 00:36:43.796 "data_size": 7936 00:36:43.796 } 00:36:43.796 ] 00:36:43.796 }' 00:36:43.796 18:35:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:36:43.796 18:35:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:36:44.054 18:35:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:36:44.054 18:35:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:36:44.054 18:35:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:36:44.054 18:35:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:36:44.054 18:35:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:36:44.054 18:35:14 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:44.054 18:35:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:44.054 18:35:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:44.054 18:35:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:36:44.054 18:35:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:44.054 18:35:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:36:44.054 "name": "raid_bdev1", 00:36:44.054 "uuid": "dc0b4280-8d0f-49e8-a22c-148a1453d426", 00:36:44.054 "strip_size_kb": 0, 00:36:44.054 "state": "online", 00:36:44.054 "raid_level": "raid1", 00:36:44.054 "superblock": true, 00:36:44.054 "num_base_bdevs": 2, 00:36:44.054 "num_base_bdevs_discovered": 1, 00:36:44.054 "num_base_bdevs_operational": 1, 00:36:44.054 "base_bdevs_list": [ 00:36:44.054 { 00:36:44.054 "name": null, 00:36:44.054 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:44.054 "is_configured": false, 00:36:44.054 "data_offset": 0, 00:36:44.054 "data_size": 7936 00:36:44.054 }, 00:36:44.054 { 00:36:44.054 "name": "BaseBdev2", 00:36:44.054 "uuid": "ff7486f0-2b70-569e-bf12-8dd706ce1c20", 00:36:44.054 "is_configured": true, 00:36:44.054 "data_offset": 256, 00:36:44.054 "data_size": 7936 00:36:44.054 } 00:36:44.054 ] 00:36:44.054 }' 00:36:44.054 18:35:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:36:44.054 18:35:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:36:44.054 18:35:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:36:44.054 18:35:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:36:44.054 18:35:14 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@784 -- # killprocess 86218 00:36:44.054 18:35:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@954 -- # '[' -z 86218 ']' 00:36:44.054 18:35:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@958 -- # kill -0 86218 00:36:44.054 18:35:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@959 -- # uname 00:36:44.054 18:35:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:36:44.054 18:35:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86218 00:36:44.054 18:35:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:36:44.054 killing process with pid 86218 00:36:44.054 Received shutdown signal, test time was about 60.000000 seconds 00:36:44.054 00:36:44.054 Latency(us) 00:36:44.054 [2024-12-06T18:35:15.003Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:44.054 [2024-12-06T18:35:15.003Z] =================================================================================================================== 00:36:44.054 [2024-12-06T18:35:15.003Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:36:44.054 18:35:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:36:44.054 18:35:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86218' 00:36:44.054 18:35:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@973 -- # kill 86218 00:36:44.054 [2024-12-06 18:35:14.945784] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:36:44.054 18:35:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@978 -- # wait 86218 00:36:44.054 [2024-12-06 18:35:14.945948] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:36:44.054 [2024-12-06 18:35:14.946014] bdev_raid.c: 
469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:36:44.054 [2024-12-06 18:35:14.946030] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:36:44.620 [2024-12-06 18:35:15.278213] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:36:45.995 18:35:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@786 -- # return 0 00:36:45.995 00:36:45.995 real 0m19.967s 00:36:45.995 user 0m25.651s 00:36:45.995 sys 0m3.025s 00:36:45.995 18:35:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:45.995 ************************************ 00:36:45.995 END TEST raid_rebuild_test_sb_4k 00:36:45.995 ************************************ 00:36:45.995 18:35:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:36:45.995 18:35:16 bdev_raid -- bdev/bdev_raid.sh@1003 -- # base_malloc_params='-m 32' 00:36:45.995 18:35:16 bdev_raid -- bdev/bdev_raid.sh@1004 -- # run_test raid_state_function_test_sb_md_separate raid_state_function_test raid1 2 true 00:36:45.995 18:35:16 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:36:45.995 18:35:16 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:45.995 18:35:16 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:36:45.995 ************************************ 00:36:45.995 START TEST raid_state_function_test_sb_md_separate 00:36:45.995 ************************************ 00:36:45.995 18:35:16 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 2 true 00:36:45.995 18:35:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:36:45.995 18:35:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:36:45.995 18:35:16 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:36:45.995 18:35:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:36:45.995 18:35:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:36:45.995 18:35:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:36:45.995 18:35:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:36:45.995 18:35:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:36:45.995 18:35:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:36:45.995 18:35:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:36:45.995 18:35:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:36:45.995 18:35:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:36:45.995 18:35:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:36:45.995 18:35:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:36:45.995 18:35:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:36:45.995 18:35:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@211 -- # local strip_size 00:36:45.995 18:35:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:36:45.995 18:35:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:36:45.995 18:35:16 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:36:45.995 18:35:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:36:45.995 18:35:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:36:45.995 18:35:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:36:45.995 18:35:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@229 -- # raid_pid=86905 00:36:45.995 Process raid pid: 86905 00:36:45.995 18:35:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:36:45.995 18:35:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 86905' 00:36:45.995 18:35:16 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@231 -- # waitforlisten 86905 00:36:45.995 18:35:16 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@835 -- # '[' -z 86905 ']' 00:36:45.995 18:35:16 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:45.995 18:35:16 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:45.995 18:35:16 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:45.995 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:36:45.995 18:35:16 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:45.995 18:35:16 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:36:45.995 [2024-12-06 18:35:16.700653] Starting SPDK v25.01-pre git sha1 b6a18b192 / DPDK 24.03.0 initialization... 00:36:45.995 [2024-12-06 18:35:16.700790] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:45.995 [2024-12-06 18:35:16.887910] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:46.254 [2024-12-06 18:35:17.027461] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:36:46.523 [2024-12-06 18:35:17.264291] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:36:46.523 [2024-12-06 18:35:17.264340] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:36:46.782 18:35:17 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:46.782 18:35:17 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@868 -- # return 0 00:36:46.782 18:35:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:36:46.782 18:35:17 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:46.782 18:35:17 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:36:46.782 [2024-12-06 18:35:17.527888] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:36:46.782 [2024-12-06 18:35:17.528096] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev 
BaseBdev1 doesn't exist now 00:36:46.782 [2024-12-06 18:35:17.528200] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:36:46.782 [2024-12-06 18:35:17.528247] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:36:46.782 18:35:17 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:46.782 18:35:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:36:46.782 18:35:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:36:46.782 18:35:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:36:46.782 18:35:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:36:46.782 18:35:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:36:46.782 18:35:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:36:46.782 18:35:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:36:46.782 18:35:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:36:46.782 18:35:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:36:46.782 18:35:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:36:46.782 18:35:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:46.782 18:35:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:36:46.782 18:35:17 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:46.782 18:35:17 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:36:46.782 18:35:17 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:46.782 18:35:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:36:46.782 "name": "Existed_Raid", 00:36:46.782 "uuid": "7a07851f-4d16-491a-ab78-1adf428e0c75", 00:36:46.782 "strip_size_kb": 0, 00:36:46.782 "state": "configuring", 00:36:46.782 "raid_level": "raid1", 00:36:46.782 "superblock": true, 00:36:46.782 "num_base_bdevs": 2, 00:36:46.782 "num_base_bdevs_discovered": 0, 00:36:46.782 "num_base_bdevs_operational": 2, 00:36:46.782 "base_bdevs_list": [ 00:36:46.782 { 00:36:46.782 "name": "BaseBdev1", 00:36:46.782 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:46.782 "is_configured": false, 00:36:46.782 "data_offset": 0, 00:36:46.782 "data_size": 0 00:36:46.782 }, 00:36:46.782 { 00:36:46.782 "name": "BaseBdev2", 00:36:46.782 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:46.782 "is_configured": false, 00:36:46.782 "data_offset": 0, 00:36:46.782 "data_size": 0 00:36:46.782 } 00:36:46.782 ] 00:36:46.782 }' 00:36:46.782 18:35:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:36:46.782 18:35:17 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:36:47.041 18:35:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:36:47.041 18:35:17 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:47.041 18:35:17 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:36:47.041 
[2024-12-06 18:35:17.975637] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:36:47.041 [2024-12-06 18:35:17.975793] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:36:47.041 18:35:17 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:47.041 18:35:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:36:47.041 18:35:17 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:47.041 18:35:17 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:36:47.041 [2024-12-06 18:35:17.987609] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:36:47.041 [2024-12-06 18:35:17.987770] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:36:47.041 [2024-12-06 18:35:17.987855] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:36:47.041 [2024-12-06 18:35:17.987904] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:36:47.300 18:35:17 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:47.300 18:35:17 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev1 00:36:47.300 18:35:17 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:47.300 18:35:17 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:36:47.300 [2024-12-06 18:35:18.041948] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:36:47.300 
BaseBdev1 00:36:47.300 18:35:18 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:47.300 18:35:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:36:47.300 18:35:18 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:36:47.300 18:35:18 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:36:47.300 18:35:18 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@905 -- # local i 00:36:47.300 18:35:18 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:36:47.300 18:35:18 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:36:47.300 18:35:18 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:36:47.300 18:35:18 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:47.300 18:35:18 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:36:47.300 18:35:18 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:47.300 18:35:18 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:36:47.300 18:35:18 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:47.300 18:35:18 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:36:47.300 [ 00:36:47.300 { 00:36:47.300 "name": "BaseBdev1", 00:36:47.300 "aliases": [ 00:36:47.300 "1ee76cc8-0254-40e0-b829-a4ae84ae1333" 00:36:47.300 ], 00:36:47.300 "product_name": "Malloc disk", 
00:36:47.300 "block_size": 4096, 00:36:47.300 "num_blocks": 8192, 00:36:47.300 "uuid": "1ee76cc8-0254-40e0-b829-a4ae84ae1333", 00:36:47.300 "md_size": 32, 00:36:47.300 "md_interleave": false, 00:36:47.300 "dif_type": 0, 00:36:47.300 "assigned_rate_limits": { 00:36:47.300 "rw_ios_per_sec": 0, 00:36:47.300 "rw_mbytes_per_sec": 0, 00:36:47.300 "r_mbytes_per_sec": 0, 00:36:47.300 "w_mbytes_per_sec": 0 00:36:47.300 }, 00:36:47.300 "claimed": true, 00:36:47.300 "claim_type": "exclusive_write", 00:36:47.300 "zoned": false, 00:36:47.300 "supported_io_types": { 00:36:47.300 "read": true, 00:36:47.300 "write": true, 00:36:47.300 "unmap": true, 00:36:47.300 "flush": true, 00:36:47.300 "reset": true, 00:36:47.300 "nvme_admin": false, 00:36:47.300 "nvme_io": false, 00:36:47.300 "nvme_io_md": false, 00:36:47.300 "write_zeroes": true, 00:36:47.300 "zcopy": true, 00:36:47.300 "get_zone_info": false, 00:36:47.300 "zone_management": false, 00:36:47.300 "zone_append": false, 00:36:47.300 "compare": false, 00:36:47.300 "compare_and_write": false, 00:36:47.300 "abort": true, 00:36:47.300 "seek_hole": false, 00:36:47.300 "seek_data": false, 00:36:47.300 "copy": true, 00:36:47.300 "nvme_iov_md": false 00:36:47.300 }, 00:36:47.300 "memory_domains": [ 00:36:47.300 { 00:36:47.300 "dma_device_id": "system", 00:36:47.300 "dma_device_type": 1 00:36:47.300 }, 00:36:47.300 { 00:36:47.300 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:36:47.300 "dma_device_type": 2 00:36:47.300 } 00:36:47.300 ], 00:36:47.300 "driver_specific": {} 00:36:47.300 } 00:36:47.300 ] 00:36:47.300 18:35:18 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:47.300 18:35:18 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@911 -- # return 0 00:36:47.300 18:35:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:36:47.300 18:35:18 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:36:47.300 18:35:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:36:47.300 18:35:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:36:47.300 18:35:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:36:47.300 18:35:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:36:47.300 18:35:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:36:47.300 18:35:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:36:47.300 18:35:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:36:47.300 18:35:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:36:47.300 18:35:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:47.300 18:35:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:36:47.300 18:35:18 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:47.300 18:35:18 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:36:47.300 18:35:18 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:47.300 18:35:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:36:47.300 "name": "Existed_Raid", 00:36:47.300 "uuid": "ede58932-03af-4fad-94c2-be712e84a14e", 
00:36:47.300 "strip_size_kb": 0, 00:36:47.300 "state": "configuring", 00:36:47.300 "raid_level": "raid1", 00:36:47.300 "superblock": true, 00:36:47.300 "num_base_bdevs": 2, 00:36:47.300 "num_base_bdevs_discovered": 1, 00:36:47.300 "num_base_bdevs_operational": 2, 00:36:47.300 "base_bdevs_list": [ 00:36:47.300 { 00:36:47.300 "name": "BaseBdev1", 00:36:47.300 "uuid": "1ee76cc8-0254-40e0-b829-a4ae84ae1333", 00:36:47.300 "is_configured": true, 00:36:47.300 "data_offset": 256, 00:36:47.300 "data_size": 7936 00:36:47.300 }, 00:36:47.300 { 00:36:47.300 "name": "BaseBdev2", 00:36:47.300 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:47.300 "is_configured": false, 00:36:47.300 "data_offset": 0, 00:36:47.300 "data_size": 0 00:36:47.300 } 00:36:47.300 ] 00:36:47.301 }' 00:36:47.301 18:35:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:36:47.301 18:35:18 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:36:47.559 18:35:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:36:47.559 18:35:18 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:47.559 18:35:18 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:36:47.559 [2024-12-06 18:35:18.485355] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:36:47.559 [2024-12-06 18:35:18.485401] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:36:47.559 18:35:18 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:47.559 18:35:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:36:47.559 18:35:18 
bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:47.559 18:35:18 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:36:47.559 [2024-12-06 18:35:18.497394] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:36:47.559 [2024-12-06 18:35:18.499794] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:36:47.559 [2024-12-06 18:35:18.499842] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:36:47.559 18:35:18 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:47.559 18:35:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:36:47.559 18:35:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:36:47.559 18:35:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:36:47.559 18:35:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:36:47.559 18:35:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:36:47.559 18:35:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:36:47.559 18:35:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:36:47.559 18:35:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:36:47.560 18:35:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:36:47.560 18:35:18 bdev_raid.raid_state_function_test_sb_md_separate -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:36:47.560 18:35:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:36:47.560 18:35:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:36:47.818 18:35:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:47.818 18:35:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:36:47.818 18:35:18 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:47.818 18:35:18 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:36:47.818 18:35:18 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:47.818 18:35:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:36:47.818 "name": "Existed_Raid", 00:36:47.818 "uuid": "24d1e73f-2acd-4ac4-8b23-11af313d2807", 00:36:47.818 "strip_size_kb": 0, 00:36:47.818 "state": "configuring", 00:36:47.818 "raid_level": "raid1", 00:36:47.818 "superblock": true, 00:36:47.818 "num_base_bdevs": 2, 00:36:47.818 "num_base_bdevs_discovered": 1, 00:36:47.818 "num_base_bdevs_operational": 2, 00:36:47.818 "base_bdevs_list": [ 00:36:47.818 { 00:36:47.818 "name": "BaseBdev1", 00:36:47.818 "uuid": "1ee76cc8-0254-40e0-b829-a4ae84ae1333", 00:36:47.818 "is_configured": true, 00:36:47.818 "data_offset": 256, 00:36:47.818 "data_size": 7936 00:36:47.818 }, 00:36:47.819 { 00:36:47.819 "name": "BaseBdev2", 00:36:47.819 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:47.819 "is_configured": false, 00:36:47.819 "data_offset": 0, 00:36:47.819 "data_size": 0 00:36:47.819 } 00:36:47.819 ] 00:36:47.819 }' 00:36:47.819 18:35:18 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:36:47.819 18:35:18 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:36:48.077 18:35:18 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev2 00:36:48.077 18:35:18 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:48.077 18:35:18 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:36:48.077 [2024-12-06 18:35:19.001848] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:36:48.077 [2024-12-06 18:35:19.002103] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:36:48.077 [2024-12-06 18:35:19.002124] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:36:48.077 [2024-12-06 18:35:19.002258] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:36:48.077 [2024-12-06 18:35:19.002409] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:36:48.077 [2024-12-06 18:35:19.002425] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:36:48.077 BaseBdev2 00:36:48.077 [2024-12-06 18:35:19.002531] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:36:48.077 18:35:19 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:48.077 18:35:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:36:48.077 18:35:19 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:36:48.077 18:35:19 bdev_raid.raid_state_function_test_sb_md_separate -- 
common/autotest_common.sh@904 -- # local bdev_timeout= 00:36:48.077 18:35:19 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@905 -- # local i 00:36:48.077 18:35:19 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:36:48.077 18:35:19 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:36:48.077 18:35:19 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:36:48.077 18:35:19 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:48.077 18:35:19 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:36:48.077 18:35:19 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:48.077 18:35:19 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:36:48.077 18:35:19 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:48.077 18:35:19 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:36:48.336 [ 00:36:48.336 { 00:36:48.336 "name": "BaseBdev2", 00:36:48.336 "aliases": [ 00:36:48.336 "44d9e548-4d50-4df6-a8e7-725ebd708191" 00:36:48.336 ], 00:36:48.336 "product_name": "Malloc disk", 00:36:48.336 "block_size": 4096, 00:36:48.336 "num_blocks": 8192, 00:36:48.336 "uuid": "44d9e548-4d50-4df6-a8e7-725ebd708191", 00:36:48.336 "md_size": 32, 00:36:48.336 "md_interleave": false, 00:36:48.336 "dif_type": 0, 00:36:48.336 "assigned_rate_limits": { 00:36:48.336 "rw_ios_per_sec": 0, 00:36:48.336 "rw_mbytes_per_sec": 0, 00:36:48.336 "r_mbytes_per_sec": 0, 00:36:48.336 "w_mbytes_per_sec": 0 00:36:48.336 }, 00:36:48.336 "claimed": true, 00:36:48.336 "claim_type": 
"exclusive_write", 00:36:48.336 "zoned": false, 00:36:48.336 "supported_io_types": { 00:36:48.336 "read": true, 00:36:48.336 "write": true, 00:36:48.336 "unmap": true, 00:36:48.336 "flush": true, 00:36:48.336 "reset": true, 00:36:48.336 "nvme_admin": false, 00:36:48.336 "nvme_io": false, 00:36:48.336 "nvme_io_md": false, 00:36:48.336 "write_zeroes": true, 00:36:48.336 "zcopy": true, 00:36:48.336 "get_zone_info": false, 00:36:48.336 "zone_management": false, 00:36:48.336 "zone_append": false, 00:36:48.336 "compare": false, 00:36:48.336 "compare_and_write": false, 00:36:48.336 "abort": true, 00:36:48.336 "seek_hole": false, 00:36:48.336 "seek_data": false, 00:36:48.336 "copy": true, 00:36:48.336 "nvme_iov_md": false 00:36:48.336 }, 00:36:48.336 "memory_domains": [ 00:36:48.336 { 00:36:48.336 "dma_device_id": "system", 00:36:48.336 "dma_device_type": 1 00:36:48.336 }, 00:36:48.336 { 00:36:48.336 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:36:48.336 "dma_device_type": 2 00:36:48.336 } 00:36:48.336 ], 00:36:48.336 "driver_specific": {} 00:36:48.336 } 00:36:48.336 ] 00:36:48.336 18:35:19 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:48.336 18:35:19 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@911 -- # return 0 00:36:48.336 18:35:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:36:48.336 18:35:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:36:48.336 18:35:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:36:48.336 18:35:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:36:48.336 18:35:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:36:48.336 
18:35:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:36:48.336 18:35:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:36:48.336 18:35:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:36:48.336 18:35:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:36:48.336 18:35:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:36:48.336 18:35:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:36:48.336 18:35:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:36:48.336 18:35:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:48.336 18:35:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:36:48.336 18:35:19 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:48.336 18:35:19 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:36:48.336 18:35:19 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:48.336 18:35:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:36:48.336 "name": "Existed_Raid", 00:36:48.336 "uuid": "24d1e73f-2acd-4ac4-8b23-11af313d2807", 00:36:48.336 "strip_size_kb": 0, 00:36:48.336 "state": "online", 00:36:48.336 "raid_level": "raid1", 00:36:48.336 "superblock": true, 00:36:48.336 "num_base_bdevs": 2, 00:36:48.336 "num_base_bdevs_discovered": 2, 00:36:48.336 "num_base_bdevs_operational": 2, 00:36:48.336 
"base_bdevs_list": [ 00:36:48.336 { 00:36:48.336 "name": "BaseBdev1", 00:36:48.336 "uuid": "1ee76cc8-0254-40e0-b829-a4ae84ae1333", 00:36:48.336 "is_configured": true, 00:36:48.336 "data_offset": 256, 00:36:48.336 "data_size": 7936 00:36:48.336 }, 00:36:48.336 { 00:36:48.336 "name": "BaseBdev2", 00:36:48.336 "uuid": "44d9e548-4d50-4df6-a8e7-725ebd708191", 00:36:48.336 "is_configured": true, 00:36:48.336 "data_offset": 256, 00:36:48.336 "data_size": 7936 00:36:48.336 } 00:36:48.336 ] 00:36:48.336 }' 00:36:48.336 18:35:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:36:48.336 18:35:19 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:36:48.595 18:35:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:36:48.595 18:35:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:36:48.595 18:35:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:36:48.595 18:35:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:36:48.595 18:35:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local name 00:36:48.595 18:35:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:36:48.595 18:35:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:36:48.595 18:35:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:36:48.595 18:35:19 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:48.595 18:35:19 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # 
set +x 00:36:48.595 [2024-12-06 18:35:19.489581] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:36:48.595 18:35:19 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:48.595 18:35:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:36:48.595 "name": "Existed_Raid", 00:36:48.595 "aliases": [ 00:36:48.595 "24d1e73f-2acd-4ac4-8b23-11af313d2807" 00:36:48.595 ], 00:36:48.595 "product_name": "Raid Volume", 00:36:48.595 "block_size": 4096, 00:36:48.595 "num_blocks": 7936, 00:36:48.595 "uuid": "24d1e73f-2acd-4ac4-8b23-11af313d2807", 00:36:48.595 "md_size": 32, 00:36:48.595 "md_interleave": false, 00:36:48.595 "dif_type": 0, 00:36:48.595 "assigned_rate_limits": { 00:36:48.595 "rw_ios_per_sec": 0, 00:36:48.595 "rw_mbytes_per_sec": 0, 00:36:48.595 "r_mbytes_per_sec": 0, 00:36:48.595 "w_mbytes_per_sec": 0 00:36:48.595 }, 00:36:48.595 "claimed": false, 00:36:48.595 "zoned": false, 00:36:48.595 "supported_io_types": { 00:36:48.595 "read": true, 00:36:48.595 "write": true, 00:36:48.595 "unmap": false, 00:36:48.595 "flush": false, 00:36:48.595 "reset": true, 00:36:48.595 "nvme_admin": false, 00:36:48.595 "nvme_io": false, 00:36:48.595 "nvme_io_md": false, 00:36:48.595 "write_zeroes": true, 00:36:48.595 "zcopy": false, 00:36:48.595 "get_zone_info": false, 00:36:48.595 "zone_management": false, 00:36:48.595 "zone_append": false, 00:36:48.595 "compare": false, 00:36:48.595 "compare_and_write": false, 00:36:48.595 "abort": false, 00:36:48.595 "seek_hole": false, 00:36:48.595 "seek_data": false, 00:36:48.595 "copy": false, 00:36:48.595 "nvme_iov_md": false 00:36:48.595 }, 00:36:48.595 "memory_domains": [ 00:36:48.595 { 00:36:48.595 "dma_device_id": "system", 00:36:48.595 "dma_device_type": 1 00:36:48.595 }, 00:36:48.595 { 00:36:48.595 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:36:48.595 "dma_device_type": 2 00:36:48.595 }, 00:36:48.595 { 
00:36:48.596 "dma_device_id": "system", 00:36:48.596 "dma_device_type": 1 00:36:48.596 }, 00:36:48.596 { 00:36:48.596 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:36:48.596 "dma_device_type": 2 00:36:48.596 } 00:36:48.596 ], 00:36:48.596 "driver_specific": { 00:36:48.596 "raid": { 00:36:48.596 "uuid": "24d1e73f-2acd-4ac4-8b23-11af313d2807", 00:36:48.596 "strip_size_kb": 0, 00:36:48.596 "state": "online", 00:36:48.596 "raid_level": "raid1", 00:36:48.596 "superblock": true, 00:36:48.596 "num_base_bdevs": 2, 00:36:48.596 "num_base_bdevs_discovered": 2, 00:36:48.596 "num_base_bdevs_operational": 2, 00:36:48.596 "base_bdevs_list": [ 00:36:48.596 { 00:36:48.596 "name": "BaseBdev1", 00:36:48.596 "uuid": "1ee76cc8-0254-40e0-b829-a4ae84ae1333", 00:36:48.596 "is_configured": true, 00:36:48.596 "data_offset": 256, 00:36:48.596 "data_size": 7936 00:36:48.596 }, 00:36:48.596 { 00:36:48.596 "name": "BaseBdev2", 00:36:48.596 "uuid": "44d9e548-4d50-4df6-a8e7-725ebd708191", 00:36:48.596 "is_configured": true, 00:36:48.596 "data_offset": 256, 00:36:48.596 "data_size": 7936 00:36:48.596 } 00:36:48.596 ] 00:36:48.596 } 00:36:48.596 } 00:36:48.596 }' 00:36:48.596 18:35:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:36:48.855 18:35:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:36:48.855 BaseBdev2' 00:36:48.855 18:35:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:36:48.855 18:35:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 32 false 0' 00:36:48.855 18:35:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:36:48.855 18:35:19 bdev_raid.raid_state_function_test_sb_md_separate 
-- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:36:48.855 18:35:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:36:48.855 18:35:19 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:48.855 18:35:19 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:36:48.855 18:35:19 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:48.855 18:35:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:36:48.855 18:35:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:36:48.855 18:35:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:36:48.855 18:35:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:36:48.855 18:35:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:36:48.855 18:35:19 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:48.855 18:35:19 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:36:48.855 18:35:19 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:48.855 18:35:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:36:48.855 18:35:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ 
\0 ]] 00:36:48.855 18:35:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:36:48.855 18:35:19 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:48.855 18:35:19 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:36:48.855 [2024-12-06 18:35:19.720991] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:36:49.115 18:35:19 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:49.115 18:35:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@260 -- # local expected_state 00:36:49.115 18:35:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:36:49.116 18:35:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@198 -- # case $1 in 00:36:49.116 18:35:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@199 -- # return 0 00:36:49.116 18:35:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:36:49.116 18:35:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:36:49.116 18:35:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:36:49.116 18:35:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:36:49.116 18:35:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:36:49.116 18:35:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:36:49.116 18:35:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=1 00:36:49.116 18:35:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:36:49.116 18:35:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:36:49.116 18:35:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:36:49.116 18:35:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:36:49.116 18:35:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:49.116 18:35:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:36:49.116 18:35:19 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:49.116 18:35:19 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:36:49.116 18:35:19 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:49.116 18:35:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:36:49.116 "name": "Existed_Raid", 00:36:49.116 "uuid": "24d1e73f-2acd-4ac4-8b23-11af313d2807", 00:36:49.116 "strip_size_kb": 0, 00:36:49.116 "state": "online", 00:36:49.116 "raid_level": "raid1", 00:36:49.116 "superblock": true, 00:36:49.116 "num_base_bdevs": 2, 00:36:49.116 "num_base_bdevs_discovered": 1, 00:36:49.116 "num_base_bdevs_operational": 1, 00:36:49.116 "base_bdevs_list": [ 00:36:49.116 { 00:36:49.116 "name": null, 00:36:49.116 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:49.116 "is_configured": false, 00:36:49.116 "data_offset": 0, 00:36:49.116 "data_size": 7936 00:36:49.116 }, 00:36:49.116 { 00:36:49.116 "name": "BaseBdev2", 00:36:49.116 "uuid": 
"44d9e548-4d50-4df6-a8e7-725ebd708191", 00:36:49.116 "is_configured": true, 00:36:49.116 "data_offset": 256, 00:36:49.116 "data_size": 7936 00:36:49.116 } 00:36:49.116 ] 00:36:49.116 }' 00:36:49.116 18:35:19 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:36:49.116 18:35:19 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:36:49.379 18:35:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:36:49.379 18:35:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:36:49.379 18:35:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:49.379 18:35:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:36:49.379 18:35:20 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:49.379 18:35:20 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:36:49.379 18:35:20 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:49.379 18:35:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:36:49.379 18:35:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:36:49.379 18:35:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:36:49.379 18:35:20 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:49.379 18:35:20 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:36:49.379 [2024-12-06 18:35:20.319325] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:36:49.379 [2024-12-06 18:35:20.319456] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:36:49.639 [2024-12-06 18:35:20.432365] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:36:49.639 [2024-12-06 18:35:20.432622] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:36:49.639 [2024-12-06 18:35:20.432776] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:36:49.639 18:35:20 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:49.639 18:35:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:36:49.639 18:35:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:36:49.639 18:35:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:49.639 18:35:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:36:49.639 18:35:20 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:49.639 18:35:20 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:36:49.639 18:35:20 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:49.639 18:35:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:36:49.639 18:35:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:36:49.639 18:35:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:36:49.639 18:35:20 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@326 -- # killprocess 86905 00:36:49.639 18:35:20 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@954 -- # '[' -z 86905 ']' 00:36:49.639 18:35:20 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@958 -- # kill -0 86905 00:36:49.639 18:35:20 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@959 -- # uname 00:36:49.639 18:35:20 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:36:49.639 18:35:20 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86905 00:36:49.639 killing process with pid 86905 00:36:49.639 18:35:20 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:36:49.639 18:35:20 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:36:49.639 18:35:20 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86905' 00:36:49.639 18:35:20 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@973 -- # kill 86905 00:36:49.639 [2024-12-06 18:35:20.524021] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:36:49.639 18:35:20 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@978 -- # wait 86905 00:36:49.639 [2024-12-06 18:35:20.542010] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:36:51.017 18:35:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@328 -- # return 0 00:36:51.017 00:36:51.017 real 0m5.158s 00:36:51.017 user 0m7.151s 00:36:51.017 sys 0m1.118s 00:36:51.017 ************************************ 00:36:51.017 END TEST raid_state_function_test_sb_md_separate 00:36:51.017 
************************************ 00:36:51.017 18:35:21 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:51.017 18:35:21 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:36:51.017 18:35:21 bdev_raid -- bdev/bdev_raid.sh@1005 -- # run_test raid_superblock_test_md_separate raid_superblock_test raid1 2 00:36:51.017 18:35:21 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:36:51.017 18:35:21 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:51.017 18:35:21 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:36:51.017 ************************************ 00:36:51.017 START TEST raid_superblock_test_md_separate 00:36:51.017 ************************************ 00:36:51.017 18:35:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 2 00:36:51.017 18:35:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:36:51.017 18:35:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:36:51.017 18:35:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:36:51.017 18:35:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:36:51.017 18:35:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:36:51.017 18:35:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:36:51.017 18:35:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:36:51.017 18:35:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:36:51.017 18:35:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@398 -- # local 
raid_bdev_name=raid_bdev1 00:36:51.017 18:35:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@399 -- # local strip_size 00:36:51.017 18:35:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:36:51.017 18:35:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:36:51.017 18:35:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:36:51.017 18:35:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:36:51.017 18:35:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:36:51.017 18:35:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@412 -- # raid_pid=87158 00:36:51.017 18:35:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:36:51.017 18:35:21 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@413 -- # waitforlisten 87158 00:36:51.017 18:35:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@835 -- # '[' -z 87158 ']' 00:36:51.017 18:35:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:51.017 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:51.017 18:35:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:51.017 18:35:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:36:51.017 18:35:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:51.017 18:35:21 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:36:51.017 [2024-12-06 18:35:21.929891] Starting SPDK v25.01-pre git sha1 b6a18b192 / DPDK 24.03.0 initialization... 00:36:51.018 [2024-12-06 18:35:21.930028] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87158 ] 00:36:51.276 [2024-12-06 18:35:22.118764] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:51.535 [2024-12-06 18:35:22.245182] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:36:51.535 [2024-12-06 18:35:22.479571] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:36:51.535 [2024-12-06 18:35:22.479618] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:36:52.104 18:35:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:52.104 18:35:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@868 -- # return 0 00:36:52.104 18:35:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:36:52.104 18:35:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:36:52.104 18:35:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:36:52.104 18:35:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:36:52.104 18:35:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:36:52.104 18:35:22 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:36:52.104 18:35:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:36:52.104 18:35:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:36:52.104 18:35:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b malloc1 00:36:52.104 18:35:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:52.104 18:35:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:36:52.104 malloc1 00:36:52.104 18:35:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:52.104 18:35:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:36:52.104 18:35:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:52.104 18:35:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:36:52.104 [2024-12-06 18:35:22.827646] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:36:52.104 [2024-12-06 18:35:22.827718] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:36:52.104 [2024-12-06 18:35:22.827747] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:36:52.104 [2024-12-06 18:35:22.827771] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:36:52.104 [2024-12-06 18:35:22.830238] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:36:52.104 [2024-12-06 18:35:22.830407] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
pt1 00:36:52.104 pt1 00:36:52.104 18:35:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:52.104 18:35:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:36:52.104 18:35:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:36:52.104 18:35:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:36:52.104 18:35:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:36:52.104 18:35:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:36:52.104 18:35:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:36:52.104 18:35:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:36:52.104 18:35:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:36:52.104 18:35:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b malloc2 00:36:52.104 18:35:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:52.104 18:35:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:36:52.104 malloc2 00:36:52.104 18:35:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:52.104 18:35:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:36:52.104 18:35:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:52.104 18:35:22 
bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:36:52.104 [2024-12-06 18:35:22.890161] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:36:52.104 [2024-12-06 18:35:22.890337] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:36:52.104 [2024-12-06 18:35:22.890400] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:36:52.104 [2024-12-06 18:35:22.890515] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:36:52.104 [2024-12-06 18:35:22.893100] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:36:52.104 [2024-12-06 18:35:22.893267] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:36:52.104 pt2 00:36:52.104 18:35:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:52.104 18:35:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:36:52.104 18:35:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:36:52.104 18:35:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:36:52.104 18:35:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:52.104 18:35:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:36:52.104 [2024-12-06 18:35:22.902183] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:36:52.104 [2024-12-06 18:35:22.904570] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:36:52.105 [2024-12-06 18:35:22.904750] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:36:52.105 [2024-12-06 18:35:22.904767] 
bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:36:52.105 [2024-12-06 18:35:22.904848] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:36:52.105 [2024-12-06 18:35:22.904969] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:36:52.105 [2024-12-06 18:35:22.904983] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:36:52.105 [2024-12-06 18:35:22.905085] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:36:52.105 18:35:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:52.105 18:35:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:36:52.105 18:35:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:36:52.105 18:35:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:36:52.105 18:35:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:36:52.105 18:35:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:36:52.105 18:35:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:36:52.105 18:35:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:36:52.105 18:35:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:36:52.105 18:35:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:36:52.105 18:35:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:36:52.105 18:35:22 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:52.105 18:35:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:52.105 18:35:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:52.105 18:35:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:36:52.105 18:35:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:52.105 18:35:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:36:52.105 "name": "raid_bdev1", 00:36:52.105 "uuid": "adb05e9e-7c8b-432f-8a76-cdfc8831018c", 00:36:52.105 "strip_size_kb": 0, 00:36:52.105 "state": "online", 00:36:52.105 "raid_level": "raid1", 00:36:52.105 "superblock": true, 00:36:52.105 "num_base_bdevs": 2, 00:36:52.105 "num_base_bdevs_discovered": 2, 00:36:52.105 "num_base_bdevs_operational": 2, 00:36:52.105 "base_bdevs_list": [ 00:36:52.105 { 00:36:52.105 "name": "pt1", 00:36:52.105 "uuid": "00000000-0000-0000-0000-000000000001", 00:36:52.105 "is_configured": true, 00:36:52.105 "data_offset": 256, 00:36:52.105 "data_size": 7936 00:36:52.105 }, 00:36:52.105 { 00:36:52.105 "name": "pt2", 00:36:52.105 "uuid": "00000000-0000-0000-0000-000000000002", 00:36:52.105 "is_configured": true, 00:36:52.105 "data_offset": 256, 00:36:52.105 "data_size": 7936 00:36:52.105 } 00:36:52.105 ] 00:36:52.105 }' 00:36:52.105 18:35:22 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:36:52.105 18:35:22 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:36:52.681 18:35:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:36:52.681 18:35:23 bdev_raid.raid_superblock_test_md_separate -- 
bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:36:52.681 18:35:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:36:52.681 18:35:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:36:52.681 18:35:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@184 -- # local name 00:36:52.681 18:35:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:36:52.681 18:35:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:36:52.681 18:35:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:36:52.681 18:35:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:52.681 18:35:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:36:52.681 [2024-12-06 18:35:23.329808] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:36:52.681 18:35:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:52.681 18:35:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:36:52.681 "name": "raid_bdev1", 00:36:52.681 "aliases": [ 00:36:52.681 "adb05e9e-7c8b-432f-8a76-cdfc8831018c" 00:36:52.681 ], 00:36:52.681 "product_name": "Raid Volume", 00:36:52.681 "block_size": 4096, 00:36:52.681 "num_blocks": 7936, 00:36:52.681 "uuid": "adb05e9e-7c8b-432f-8a76-cdfc8831018c", 00:36:52.681 "md_size": 32, 00:36:52.681 "md_interleave": false, 00:36:52.681 "dif_type": 0, 00:36:52.681 "assigned_rate_limits": { 00:36:52.681 "rw_ios_per_sec": 0, 00:36:52.681 "rw_mbytes_per_sec": 0, 00:36:52.681 "r_mbytes_per_sec": 0, 00:36:52.681 "w_mbytes_per_sec": 0 00:36:52.681 }, 00:36:52.681 "claimed": false, 00:36:52.681 "zoned": false, 
00:36:52.681 "supported_io_types": { 00:36:52.681 "read": true, 00:36:52.681 "write": true, 00:36:52.681 "unmap": false, 00:36:52.682 "flush": false, 00:36:52.682 "reset": true, 00:36:52.682 "nvme_admin": false, 00:36:52.682 "nvme_io": false, 00:36:52.682 "nvme_io_md": false, 00:36:52.682 "write_zeroes": true, 00:36:52.682 "zcopy": false, 00:36:52.682 "get_zone_info": false, 00:36:52.682 "zone_management": false, 00:36:52.682 "zone_append": false, 00:36:52.682 "compare": false, 00:36:52.682 "compare_and_write": false, 00:36:52.682 "abort": false, 00:36:52.682 "seek_hole": false, 00:36:52.682 "seek_data": false, 00:36:52.682 "copy": false, 00:36:52.682 "nvme_iov_md": false 00:36:52.682 }, 00:36:52.682 "memory_domains": [ 00:36:52.682 { 00:36:52.682 "dma_device_id": "system", 00:36:52.682 "dma_device_type": 1 00:36:52.682 }, 00:36:52.682 { 00:36:52.682 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:36:52.682 "dma_device_type": 2 00:36:52.682 }, 00:36:52.682 { 00:36:52.682 "dma_device_id": "system", 00:36:52.682 "dma_device_type": 1 00:36:52.682 }, 00:36:52.682 { 00:36:52.682 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:36:52.682 "dma_device_type": 2 00:36:52.682 } 00:36:52.682 ], 00:36:52.682 "driver_specific": { 00:36:52.682 "raid": { 00:36:52.682 "uuid": "adb05e9e-7c8b-432f-8a76-cdfc8831018c", 00:36:52.682 "strip_size_kb": 0, 00:36:52.682 "state": "online", 00:36:52.682 "raid_level": "raid1", 00:36:52.682 "superblock": true, 00:36:52.682 "num_base_bdevs": 2, 00:36:52.682 "num_base_bdevs_discovered": 2, 00:36:52.682 "num_base_bdevs_operational": 2, 00:36:52.682 "base_bdevs_list": [ 00:36:52.682 { 00:36:52.682 "name": "pt1", 00:36:52.682 "uuid": "00000000-0000-0000-0000-000000000001", 00:36:52.682 "is_configured": true, 00:36:52.682 "data_offset": 256, 00:36:52.682 "data_size": 7936 00:36:52.682 }, 00:36:52.682 { 00:36:52.682 "name": "pt2", 00:36:52.682 "uuid": "00000000-0000-0000-0000-000000000002", 00:36:52.682 "is_configured": true, 00:36:52.682 "data_offset": 256, 
00:36:52.682 "data_size": 7936 00:36:52.682 } 00:36:52.682 ] 00:36:52.682 } 00:36:52.682 } 00:36:52.682 }' 00:36:52.682 18:35:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:36:52.682 18:35:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:36:52.682 pt2' 00:36:52.682 18:35:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:36:52.682 18:35:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 32 false 0' 00:36:52.682 18:35:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:36:52.682 18:35:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:36:52.682 18:35:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:36:52.682 18:35:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:52.682 18:35:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:36:52.682 18:35:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:52.682 18:35:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:36:52.682 18:35:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:36:52.682 18:35:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:36:52.682 18:35:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | 
[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:36:52.682 18:35:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:36:52.682 18:35:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:52.682 18:35:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:36:52.682 18:35:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:52.682 18:35:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:36:52.682 18:35:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:36:52.682 18:35:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:36:52.682 18:35:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:52.682 18:35:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:36:52.682 18:35:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:36:52.682 [2024-12-06 18:35:23.553477] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:36:52.682 18:35:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:52.682 18:35:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=adb05e9e-7c8b-432f-8a76-cdfc8831018c 00:36:52.682 18:35:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@436 -- # '[' -z adb05e9e-7c8b-432f-8a76-cdfc8831018c ']' 00:36:52.682 18:35:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:36:52.682 18:35:23 
bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:52.682 18:35:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:36:52.682 [2024-12-06 18:35:23.601135] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:36:52.682 [2024-12-06 18:35:23.601178] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:36:52.682 [2024-12-06 18:35:23.601276] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:36:52.682 [2024-12-06 18:35:23.601333] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:36:52.682 [2024-12-06 18:35:23.601349] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:36:52.682 18:35:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:52.682 18:35:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:52.682 18:35:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:36:52.682 18:35:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:52.682 18:35:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:36:52.973 18:35:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:52.973 18:35:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:36:52.973 18:35:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:36:52.973 18:35:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:36:52.973 18:35:23 bdev_raid.raid_superblock_test_md_separate -- 
bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:36:52.973 18:35:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:52.973 18:35:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:36:52.973 18:35:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:52.973 18:35:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:36:52.973 18:35:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:36:52.973 18:35:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:52.973 18:35:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:36:52.973 18:35:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:52.973 18:35:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:36:52.973 18:35:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:36:52.973 18:35:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:52.973 18:35:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:36:52.973 18:35:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:52.973 18:35:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:36:52.973 18:35:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:36:52.973 18:35:23 bdev_raid.raid_superblock_test_md_separate -- 
common/autotest_common.sh@652 -- # local es=0 00:36:52.973 18:35:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:36:52.973 18:35:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:36:52.973 18:35:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:36:52.973 18:35:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:36:52.973 18:35:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:36:52.973 18:35:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:36:52.973 18:35:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:52.973 18:35:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:36:52.973 [2024-12-06 18:35:23.728970] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:36:52.973 [2024-12-06 18:35:23.731480] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:36:52.973 [2024-12-06 18:35:23.731560] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:36:52.973 [2024-12-06 18:35:23.731617] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:36:52.973 [2024-12-06 18:35:23.731634] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:36:52.973 [2024-12-06 18:35:23.731647] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state 
configuring 00:36:52.973 request: 00:36:52.973 { 00:36:52.974 "name": "raid_bdev1", 00:36:52.974 "raid_level": "raid1", 00:36:52.974 "base_bdevs": [ 00:36:52.974 "malloc1", 00:36:52.974 "malloc2" 00:36:52.974 ], 00:36:52.974 "superblock": false, 00:36:52.974 "method": "bdev_raid_create", 00:36:52.974 "req_id": 1 00:36:52.974 } 00:36:52.974 Got JSON-RPC error response 00:36:52.974 response: 00:36:52.974 { 00:36:52.974 "code": -17, 00:36:52.974 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:36:52.974 } 00:36:52.974 18:35:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:36:52.974 18:35:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@655 -- # es=1 00:36:52.974 18:35:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:36:52.974 18:35:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:36:52.974 18:35:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:36:52.974 18:35:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:52.974 18:35:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:52.974 18:35:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:36:52.974 18:35:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:36:52.974 18:35:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:52.974 18:35:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:36:52.974 18:35:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:36:52.974 18:35:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@465 -- # rpc_cmd 
bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:36:52.974 18:35:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:52.974 18:35:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:36:52.974 [2024-12-06 18:35:23.784878] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:36:52.974 [2024-12-06 18:35:23.784934] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:36:52.974 [2024-12-06 18:35:23.784953] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:36:52.974 [2024-12-06 18:35:23.784968] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:36:52.974 [2024-12-06 18:35:23.787400] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:36:52.974 [2024-12-06 18:35:23.787442] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:36:52.974 [2024-12-06 18:35:23.787495] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:36:52.974 [2024-12-06 18:35:23.787552] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:36:52.974 pt1 00:36:52.974 18:35:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:52.974 18:35:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:36:52.974 18:35:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:36:52.974 18:35:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:36:52.974 18:35:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:36:52.974 18:35:23 bdev_raid.raid_superblock_test_md_separate -- 
bdev/bdev_raid.sh@106 -- # local strip_size=0 00:36:52.974 18:35:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:36:52.974 18:35:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:36:52.974 18:35:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:36:52.974 18:35:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:36:52.974 18:35:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:36:52.974 18:35:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:52.974 18:35:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:52.974 18:35:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:52.974 18:35:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:36:52.974 18:35:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:52.974 18:35:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:36:52.974 "name": "raid_bdev1", 00:36:52.974 "uuid": "adb05e9e-7c8b-432f-8a76-cdfc8831018c", 00:36:52.974 "strip_size_kb": 0, 00:36:52.974 "state": "configuring", 00:36:52.974 "raid_level": "raid1", 00:36:52.974 "superblock": true, 00:36:52.974 "num_base_bdevs": 2, 00:36:52.974 "num_base_bdevs_discovered": 1, 00:36:52.974 "num_base_bdevs_operational": 2, 00:36:52.974 "base_bdevs_list": [ 00:36:52.974 { 00:36:52.974 "name": "pt1", 00:36:52.974 "uuid": "00000000-0000-0000-0000-000000000001", 00:36:52.974 "is_configured": true, 00:36:52.974 "data_offset": 256, 00:36:52.974 "data_size": 7936 00:36:52.974 }, 00:36:52.974 { 
00:36:52.974 "name": null, 00:36:52.974 "uuid": "00000000-0000-0000-0000-000000000002", 00:36:52.974 "is_configured": false, 00:36:52.974 "data_offset": 256, 00:36:52.974 "data_size": 7936 00:36:52.974 } 00:36:52.974 ] 00:36:52.974 }' 00:36:52.974 18:35:23 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:36:52.974 18:35:23 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:36:53.541 18:35:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:36:53.541 18:35:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:36:53.541 18:35:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:36:53.541 18:35:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:36:53.541 18:35:24 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:53.541 18:35:24 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:36:53.541 [2024-12-06 18:35:24.192303] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:36:53.541 [2024-12-06 18:35:24.192388] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:36:53.541 [2024-12-06 18:35:24.192413] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:36:53.541 [2024-12-06 18:35:24.192428] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:36:53.541 [2024-12-06 18:35:24.192646] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:36:53.541 [2024-12-06 18:35:24.192666] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:36:53.541 [2024-12-06 18:35:24.192714] 
bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:36:53.541 [2024-12-06 18:35:24.192739] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:36:53.541 [2024-12-06 18:35:24.192854] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:36:53.541 [2024-12-06 18:35:24.192868] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:36:53.541 [2024-12-06 18:35:24.192942] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:36:53.541 [2024-12-06 18:35:24.193059] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:36:53.541 [2024-12-06 18:35:24.193068] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:36:53.541 [2024-12-06 18:35:24.193183] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:36:53.541 pt2 00:36:53.541 18:35:24 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:53.541 18:35:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:36:53.541 18:35:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:36:53.541 18:35:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:36:53.541 18:35:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:36:53.541 18:35:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:36:53.541 18:35:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:36:53.541 18:35:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:36:53.541 18:35:24 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:36:53.541 18:35:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:36:53.541 18:35:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:36:53.541 18:35:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:36:53.541 18:35:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:36:53.541 18:35:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:53.541 18:35:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:53.541 18:35:24 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:53.541 18:35:24 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:36:53.541 18:35:24 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:53.541 18:35:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:36:53.541 "name": "raid_bdev1", 00:36:53.541 "uuid": "adb05e9e-7c8b-432f-8a76-cdfc8831018c", 00:36:53.541 "strip_size_kb": 0, 00:36:53.541 "state": "online", 00:36:53.541 "raid_level": "raid1", 00:36:53.541 "superblock": true, 00:36:53.541 "num_base_bdevs": 2, 00:36:53.541 "num_base_bdevs_discovered": 2, 00:36:53.541 "num_base_bdevs_operational": 2, 00:36:53.541 "base_bdevs_list": [ 00:36:53.541 { 00:36:53.541 "name": "pt1", 00:36:53.541 "uuid": "00000000-0000-0000-0000-000000000001", 00:36:53.541 "is_configured": true, 00:36:53.541 "data_offset": 256, 00:36:53.541 "data_size": 7936 00:36:53.541 }, 00:36:53.541 { 00:36:53.541 "name": "pt2", 00:36:53.541 "uuid": 
"00000000-0000-0000-0000-000000000002", 00:36:53.541 "is_configured": true, 00:36:53.541 "data_offset": 256, 00:36:53.541 "data_size": 7936 00:36:53.541 } 00:36:53.541 ] 00:36:53.541 }' 00:36:53.541 18:35:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:36:53.541 18:35:24 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:36:53.800 18:35:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:36:53.800 18:35:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:36:53.800 18:35:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:36:53.800 18:35:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:36:53.800 18:35:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@184 -- # local name 00:36:53.800 18:35:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:36:53.800 18:35:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:36:53.800 18:35:24 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:53.800 18:35:24 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:36:53.800 18:35:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:36:53.800 [2024-12-06 18:35:24.631935] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:36:53.800 18:35:24 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:53.800 18:35:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:36:53.800 "name": "raid_bdev1", 00:36:53.800 
"aliases": [ 00:36:53.800 "adb05e9e-7c8b-432f-8a76-cdfc8831018c" 00:36:53.800 ], 00:36:53.800 "product_name": "Raid Volume", 00:36:53.800 "block_size": 4096, 00:36:53.800 "num_blocks": 7936, 00:36:53.800 "uuid": "adb05e9e-7c8b-432f-8a76-cdfc8831018c", 00:36:53.800 "md_size": 32, 00:36:53.800 "md_interleave": false, 00:36:53.800 "dif_type": 0, 00:36:53.800 "assigned_rate_limits": { 00:36:53.800 "rw_ios_per_sec": 0, 00:36:53.800 "rw_mbytes_per_sec": 0, 00:36:53.800 "r_mbytes_per_sec": 0, 00:36:53.800 "w_mbytes_per_sec": 0 00:36:53.800 }, 00:36:53.800 "claimed": false, 00:36:53.800 "zoned": false, 00:36:53.800 "supported_io_types": { 00:36:53.800 "read": true, 00:36:53.801 "write": true, 00:36:53.801 "unmap": false, 00:36:53.801 "flush": false, 00:36:53.801 "reset": true, 00:36:53.801 "nvme_admin": false, 00:36:53.801 "nvme_io": false, 00:36:53.801 "nvme_io_md": false, 00:36:53.801 "write_zeroes": true, 00:36:53.801 "zcopy": false, 00:36:53.801 "get_zone_info": false, 00:36:53.801 "zone_management": false, 00:36:53.801 "zone_append": false, 00:36:53.801 "compare": false, 00:36:53.801 "compare_and_write": false, 00:36:53.801 "abort": false, 00:36:53.801 "seek_hole": false, 00:36:53.801 "seek_data": false, 00:36:53.801 "copy": false, 00:36:53.801 "nvme_iov_md": false 00:36:53.801 }, 00:36:53.801 "memory_domains": [ 00:36:53.801 { 00:36:53.801 "dma_device_id": "system", 00:36:53.801 "dma_device_type": 1 00:36:53.801 }, 00:36:53.801 { 00:36:53.801 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:36:53.801 "dma_device_type": 2 00:36:53.801 }, 00:36:53.801 { 00:36:53.801 "dma_device_id": "system", 00:36:53.801 "dma_device_type": 1 00:36:53.801 }, 00:36:53.801 { 00:36:53.801 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:36:53.801 "dma_device_type": 2 00:36:53.801 } 00:36:53.801 ], 00:36:53.801 "driver_specific": { 00:36:53.801 "raid": { 00:36:53.801 "uuid": "adb05e9e-7c8b-432f-8a76-cdfc8831018c", 00:36:53.801 "strip_size_kb": 0, 00:36:53.801 "state": "online", 00:36:53.801 
"raid_level": "raid1", 00:36:53.801 "superblock": true, 00:36:53.801 "num_base_bdevs": 2, 00:36:53.801 "num_base_bdevs_discovered": 2, 00:36:53.801 "num_base_bdevs_operational": 2, 00:36:53.801 "base_bdevs_list": [ 00:36:53.801 { 00:36:53.801 "name": "pt1", 00:36:53.801 "uuid": "00000000-0000-0000-0000-000000000001", 00:36:53.801 "is_configured": true, 00:36:53.801 "data_offset": 256, 00:36:53.801 "data_size": 7936 00:36:53.801 }, 00:36:53.801 { 00:36:53.801 "name": "pt2", 00:36:53.801 "uuid": "00000000-0000-0000-0000-000000000002", 00:36:53.801 "is_configured": true, 00:36:53.801 "data_offset": 256, 00:36:53.801 "data_size": 7936 00:36:53.801 } 00:36:53.801 ] 00:36:53.801 } 00:36:53.801 } 00:36:53.801 }' 00:36:53.801 18:35:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:36:53.801 18:35:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:36:53.801 pt2' 00:36:53.801 18:35:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:36:53.801 18:35:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 32 false 0' 00:36:53.801 18:35:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:36:53.801 18:35:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:36:53.801 18:35:24 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:53.801 18:35:24 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:36:53.801 18:35:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:36:54.061 18:35:24 
bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:54.061 18:35:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:36:54.061 18:35:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:36:54.061 18:35:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:36:54.061 18:35:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:36:54.061 18:35:24 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:54.061 18:35:24 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:36:54.061 18:35:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:36:54.061 18:35:24 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:54.061 18:35:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:36:54.061 18:35:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:36:54.061 18:35:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:36:54.061 18:35:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:36:54.061 18:35:24 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:54.061 18:35:24 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:36:54.061 [2024-12-06 18:35:24.843638] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: 
raid_bdev_dump_config_json 00:36:54.061 18:35:24 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:54.061 18:35:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # '[' adb05e9e-7c8b-432f-8a76-cdfc8831018c '!=' adb05e9e-7c8b-432f-8a76-cdfc8831018c ']' 00:36:54.061 18:35:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:36:54.061 18:35:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@198 -- # case $1 in 00:36:54.061 18:35:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@199 -- # return 0 00:36:54.061 18:35:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:36:54.061 18:35:24 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:54.061 18:35:24 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:36:54.061 [2024-12-06 18:35:24.883360] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:36:54.061 18:35:24 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:54.061 18:35:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:36:54.061 18:35:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:36:54.061 18:35:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:36:54.061 18:35:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:36:54.061 18:35:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:36:54.061 18:35:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:36:54.061 
18:35:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:36:54.061 18:35:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:36:54.061 18:35:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:36:54.061 18:35:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:36:54.061 18:35:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:54.061 18:35:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:54.061 18:35:24 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:54.061 18:35:24 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:36:54.061 18:35:24 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:54.061 18:35:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:36:54.061 "name": "raid_bdev1", 00:36:54.061 "uuid": "adb05e9e-7c8b-432f-8a76-cdfc8831018c", 00:36:54.061 "strip_size_kb": 0, 00:36:54.061 "state": "online", 00:36:54.061 "raid_level": "raid1", 00:36:54.061 "superblock": true, 00:36:54.061 "num_base_bdevs": 2, 00:36:54.061 "num_base_bdevs_discovered": 1, 00:36:54.061 "num_base_bdevs_operational": 1, 00:36:54.061 "base_bdevs_list": [ 00:36:54.061 { 00:36:54.061 "name": null, 00:36:54.061 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:54.061 "is_configured": false, 00:36:54.061 "data_offset": 0, 00:36:54.061 "data_size": 7936 00:36:54.061 }, 00:36:54.061 { 00:36:54.061 "name": "pt2", 00:36:54.061 "uuid": "00000000-0000-0000-0000-000000000002", 00:36:54.061 "is_configured": true, 00:36:54.061 "data_offset": 256, 00:36:54.061 "data_size": 7936 00:36:54.061 } 
00:36:54.061 ] 00:36:54.061 }' 00:36:54.061 18:35:24 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:36:54.061 18:35:24 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:36:54.631 18:35:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:36:54.631 18:35:25 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:54.631 18:35:25 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:36:54.631 [2024-12-06 18:35:25.294789] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:36:54.631 [2024-12-06 18:35:25.294820] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:36:54.631 [2024-12-06 18:35:25.294902] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:36:54.631 [2024-12-06 18:35:25.294956] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:36:54.631 [2024-12-06 18:35:25.294972] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:36:54.631 18:35:25 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:54.631 18:35:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:36:54.631 18:35:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:54.631 18:35:25 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:54.631 18:35:25 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:36:54.631 18:35:25 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:54.631 18:35:25 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:36:54.631 18:35:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:36:54.631 18:35:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:36:54.631 18:35:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:36:54.631 18:35:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:36:54.631 18:35:25 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:54.631 18:35:25 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:36:54.631 18:35:25 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:54.631 18:35:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:36:54.631 18:35:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:36:54.631 18:35:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:36:54.631 18:35:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:36:54.631 18:35:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@519 -- # i=1 00:36:54.631 18:35:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:36:54.631 18:35:25 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:54.631 18:35:25 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:36:54.631 [2024-12-06 18:35:25.362754] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:36:54.631 [2024-12-06 
18:35:25.362814] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:36:54.631 [2024-12-06 18:35:25.362833] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:36:54.631 [2024-12-06 18:35:25.362848] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:36:54.631 [2024-12-06 18:35:25.365461] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:36:54.631 [2024-12-06 18:35:25.365643] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:36:54.631 [2024-12-06 18:35:25.365715] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:36:54.631 [2024-12-06 18:35:25.365777] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:36:54.631 [2024-12-06 18:35:25.365886] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:36:54.631 [2024-12-06 18:35:25.365902] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:36:54.631 [2024-12-06 18:35:25.365982] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:36:54.631 [2024-12-06 18:35:25.366094] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:36:54.631 [2024-12-06 18:35:25.366104] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:36:54.631 [2024-12-06 18:35:25.366230] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:36:54.631 pt2 00:36:54.631 18:35:25 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:54.631 18:35:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:36:54.631 18:35:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=raid_bdev1 00:36:54.631 18:35:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:36:54.631 18:35:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:36:54.631 18:35:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:36:54.631 18:35:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:36:54.631 18:35:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:36:54.631 18:35:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:36:54.631 18:35:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:36:54.631 18:35:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:36:54.631 18:35:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:54.631 18:35:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:54.631 18:35:25 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:54.631 18:35:25 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:36:54.631 18:35:25 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:54.631 18:35:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:36:54.631 "name": "raid_bdev1", 00:36:54.631 "uuid": "adb05e9e-7c8b-432f-8a76-cdfc8831018c", 00:36:54.631 "strip_size_kb": 0, 00:36:54.631 "state": "online", 00:36:54.631 "raid_level": "raid1", 00:36:54.631 "superblock": true, 00:36:54.631 "num_base_bdevs": 2, 00:36:54.631 
"num_base_bdevs_discovered": 1, 00:36:54.631 "num_base_bdevs_operational": 1, 00:36:54.631 "base_bdevs_list": [ 00:36:54.631 { 00:36:54.631 "name": null, 00:36:54.631 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:54.631 "is_configured": false, 00:36:54.631 "data_offset": 256, 00:36:54.631 "data_size": 7936 00:36:54.631 }, 00:36:54.631 { 00:36:54.631 "name": "pt2", 00:36:54.631 "uuid": "00000000-0000-0000-0000-000000000002", 00:36:54.631 "is_configured": true, 00:36:54.631 "data_offset": 256, 00:36:54.631 "data_size": 7936 00:36:54.631 } 00:36:54.631 ] 00:36:54.631 }' 00:36:54.631 18:35:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:36:54.631 18:35:25 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:36:54.891 18:35:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:36:54.891 18:35:25 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:54.891 18:35:25 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:36:54.891 [2024-12-06 18:35:25.774162] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:36:54.891 [2024-12-06 18:35:25.774189] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:36:54.891 [2024-12-06 18:35:25.774246] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:36:54.891 [2024-12-06 18:35:25.774296] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:36:54.891 [2024-12-06 18:35:25.774307] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:36:54.891 18:35:25 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:54.891 18:35:25 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:54.891 18:35:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:36:54.891 18:35:25 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:54.891 18:35:25 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:36:54.891 18:35:25 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:54.891 18:35:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:36:54.891 18:35:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:36:54.891 18:35:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:36:54.891 18:35:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:36:54.891 18:35:25 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:54.891 18:35:25 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:36:54.891 [2024-12-06 18:35:25.830111] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:36:54.891 [2024-12-06 18:35:25.830171] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:36:54.891 [2024-12-06 18:35:25.830193] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:36:54.891 [2024-12-06 18:35:25.830205] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:36:54.891 [2024-12-06 18:35:25.832749] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:36:54.891 [2024-12-06 18:35:25.832790] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created 
pt_bdev for: pt1 00:36:54.891 [2024-12-06 18:35:25.832842] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:36:54.891 [2024-12-06 18:35:25.832883] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:36:54.891 [2024-12-06 18:35:25.832997] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:36:54.891 [2024-12-06 18:35:25.833008] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:36:54.891 [2024-12-06 18:35:25.833026] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:36:54.891 [2024-12-06 18:35:25.833096] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:36:54.891 [2024-12-06 18:35:25.833211] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:36:54.891 [2024-12-06 18:35:25.833223] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:36:54.891 [2024-12-06 18:35:25.833287] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:36:54.891 [2024-12-06 18:35:25.833394] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:36:54.891 [2024-12-06 18:35:25.833406] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:36:54.891 [2024-12-06 18:35:25.833518] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:36:54.891 pt1 00:36:54.891 18:35:25 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:54.891 18:35:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:36:54.891 18:35:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 
00:36:54.891 18:35:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:36:54.891 18:35:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:36:54.891 18:35:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:36:54.891 18:35:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:36:54.891 18:35:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:36:54.891 18:35:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:36:54.891 18:35:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:36:54.891 18:35:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:36:54.891 18:35:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:36:55.151 18:35:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:55.151 18:35:25 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:55.151 18:35:25 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:36:55.151 18:35:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:55.151 18:35:25 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:55.151 18:35:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:36:55.151 "name": "raid_bdev1", 00:36:55.151 "uuid": "adb05e9e-7c8b-432f-8a76-cdfc8831018c", 00:36:55.151 "strip_size_kb": 0, 00:36:55.151 "state": "online", 00:36:55.151 "raid_level": "raid1", 
00:36:55.151 "superblock": true, 00:36:55.151 "num_base_bdevs": 2, 00:36:55.151 "num_base_bdevs_discovered": 1, 00:36:55.151 "num_base_bdevs_operational": 1, 00:36:55.151 "base_bdevs_list": [ 00:36:55.151 { 00:36:55.151 "name": null, 00:36:55.151 "uuid": "00000000-0000-0000-0000-000000000000", 00:36:55.151 "is_configured": false, 00:36:55.151 "data_offset": 256, 00:36:55.151 "data_size": 7936 00:36:55.151 }, 00:36:55.151 { 00:36:55.151 "name": "pt2", 00:36:55.151 "uuid": "00000000-0000-0000-0000-000000000002", 00:36:55.151 "is_configured": true, 00:36:55.151 "data_offset": 256, 00:36:55.151 "data_size": 7936 00:36:55.151 } 00:36:55.151 ] 00:36:55.151 }' 00:36:55.151 18:35:25 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:36:55.151 18:35:25 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:36:55.410 18:35:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:36:55.410 18:35:26 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:55.410 18:35:26 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:36:55.410 18:35:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:36:55.410 18:35:26 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:55.410 18:35:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:36:55.410 18:35:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:36:55.410 18:35:26 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:55.410 18:35:26 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:36:55.410 18:35:26 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:36:55.410 [2024-12-06 18:35:26.313650] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:36:55.410 18:35:26 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:55.670 18:35:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # '[' adb05e9e-7c8b-432f-8a76-cdfc8831018c '!=' adb05e9e-7c8b-432f-8a76-cdfc8831018c ']' 00:36:55.670 18:35:26 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@563 -- # killprocess 87158 00:36:55.670 18:35:26 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@954 -- # '[' -z 87158 ']' 00:36:55.670 18:35:26 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@958 -- # kill -0 87158 00:36:55.670 18:35:26 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@959 -- # uname 00:36:55.670 18:35:26 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:36:55.670 18:35:26 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 87158 00:36:55.670 18:35:26 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:36:55.670 killing process with pid 87158 00:36:55.670 18:35:26 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:36:55.670 18:35:26 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@972 -- # echo 'killing process with pid 87158' 00:36:55.670 18:35:26 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@973 -- # kill 87158 00:36:55.670 [2024-12-06 18:35:26.401214] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:36:55.670 [2024-12-06 18:35:26.401312] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: 
raid_bdev_destruct 00:36:55.670 [2024-12-06 18:35:26.401365] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:36:55.670 18:35:26 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@978 -- # wait 87158 00:36:55.670 [2024-12-06 18:35:26.401387] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:36:55.929 [2024-12-06 18:35:26.633130] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:36:57.310 18:35:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@565 -- # return 0 00:36:57.310 00:36:57.310 real 0m6.001s 00:36:57.310 user 0m8.857s 00:36:57.310 sys 0m1.282s 00:36:57.310 18:35:27 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:57.310 ************************************ 00:36:57.310 END TEST raid_superblock_test_md_separate 00:36:57.310 ************************************ 00:36:57.310 18:35:27 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:36:57.310 18:35:27 bdev_raid -- bdev/bdev_raid.sh@1006 -- # '[' true = true ']' 00:36:57.310 18:35:27 bdev_raid -- bdev/bdev_raid.sh@1007 -- # run_test raid_rebuild_test_sb_md_separate raid_rebuild_test raid1 2 true false true 00:36:57.310 18:35:27 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:36:57.310 18:35:27 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:57.310 18:35:27 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:36:57.310 ************************************ 00:36:57.310 START TEST raid_rebuild_test_sb_md_separate 00:36:57.310 ************************************ 00:36:57.310 18:35:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 true false true 00:36:57.310 18:35:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@569 -- 
# local raid_level=raid1 00:36:57.310 18:35:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:36:57.310 18:35:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:36:57.310 18:35:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:36:57.310 18:35:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@573 -- # local verify=true 00:36:57.310 18:35:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:36:57.310 18:35:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:36:57.310 18:35:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:36:57.310 18:35:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:36:57.310 18:35:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:36:57.310 18:35:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:36:57.310 18:35:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:36:57.310 18:35:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:36:57.310 18:35:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:36:57.310 18:35:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:36:57.310 18:35:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:36:57.310 18:35:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # local strip_size 00:36:57.310 18:35:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@577 -- # local create_arg 
00:36:57.310 18:35:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:36:57.310 18:35:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@579 -- # local data_offset 00:36:57.310 18:35:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:36:57.310 18:35:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:36:57.310 18:35:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:36:57.310 18:35:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:36:57.310 18:35:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@597 -- # raid_pid=87481 00:36:57.310 18:35:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:36:57.310 18:35:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@598 -- # waitforlisten 87481 00:36:57.310 18:35:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@835 -- # '[' -z 87481 ']' 00:36:57.310 18:35:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:57.310 18:35:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:57.310 18:35:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:57.310 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:36:57.310 18:35:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:57.310 18:35:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:36:57.310 I/O size of 3145728 is greater than zero copy threshold (65536). 00:36:57.310 Zero copy mechanism will not be used. 00:36:57.310 [2024-12-06 18:35:28.030467] Starting SPDK v25.01-pre git sha1 b6a18b192 / DPDK 24.03.0 initialization... 00:36:57.310 [2024-12-06 18:35:28.030603] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87481 ] 00:36:57.310 [2024-12-06 18:35:28.213218] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:57.570 [2024-12-06 18:35:28.343138] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:36:57.830 [2024-12-06 18:35:28.583952] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:36:57.830 [2024-12-06 18:35:28.583994] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:36:58.090 18:35:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:58.090 18:35:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@868 -- # return 0 00:36:58.090 18:35:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:36:58.090 18:35:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev1_malloc 00:36:58.090 18:35:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:58.090 18:35:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:36:58.091 BaseBdev1_malloc 
00:36:58.091 18:35:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:58.091 18:35:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:36:58.091 18:35:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:58.091 18:35:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:36:58.091 [2024-12-06 18:35:28.901493] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:36:58.091 [2024-12-06 18:35:28.901562] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:36:58.091 [2024-12-06 18:35:28.901588] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:36:58.091 [2024-12-06 18:35:28.901603] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:36:58.091 [2024-12-06 18:35:28.904123] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:36:58.091 [2024-12-06 18:35:28.904175] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:36:58.091 BaseBdev1 00:36:58.091 18:35:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:58.091 18:35:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:36:58.091 18:35:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev2_malloc 00:36:58.091 18:35:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:58.091 18:35:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:36:58.091 BaseBdev2_malloc 00:36:58.091 18:35:28 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:58.091 18:35:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:36:58.091 18:35:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:58.091 18:35:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:36:58.091 [2024-12-06 18:35:28.961875] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:36:58.091 [2024-12-06 18:35:28.961938] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:36:58.091 [2024-12-06 18:35:28.961960] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:36:58.091 [2024-12-06 18:35:28.961976] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:36:58.091 [2024-12-06 18:35:28.964494] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:36:58.091 [2024-12-06 18:35:28.964687] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:36:58.091 BaseBdev2 00:36:58.091 18:35:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:58.091 18:35:28 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b spare_malloc 00:36:58.091 18:35:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:58.091 18:35:28 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:36:58.091 spare_malloc 00:36:58.091 18:35:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:58.091 18:35:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 
100000 -n 100000 00:36:58.091 18:35:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:58.091 18:35:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:36:58.350 spare_delay 00:36:58.350 18:35:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:58.350 18:35:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:36:58.350 18:35:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:58.350 18:35:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:36:58.350 [2024-12-06 18:35:29.047009] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:36:58.351 [2024-12-06 18:35:29.047072] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:36:58.351 [2024-12-06 18:35:29.047096] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:36:58.351 [2024-12-06 18:35:29.047111] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:36:58.351 [2024-12-06 18:35:29.049604] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:36:58.351 [2024-12-06 18:35:29.049647] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:36:58.351 spare 00:36:58.351 18:35:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:58.351 18:35:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:36:58.351 18:35:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:58.351 18:35:29 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@10 -- # set +x 00:36:58.351 [2024-12-06 18:35:29.059048] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:36:58.351 [2024-12-06 18:35:29.061634] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:36:58.351 [2024-12-06 18:35:29.061823] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:36:58.351 [2024-12-06 18:35:29.061840] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:36:58.351 [2024-12-06 18:35:29.061923] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:36:58.351 [2024-12-06 18:35:29.062055] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:36:58.351 [2024-12-06 18:35:29.062067] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:36:58.351 [2024-12-06 18:35:29.062213] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:36:58.351 18:35:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:58.351 18:35:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:36:58.351 18:35:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:36:58.351 18:35:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:36:58.351 18:35:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:36:58.351 18:35:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:36:58.351 18:35:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:36:58.351 18:35:29 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:36:58.351 18:35:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:36:58.351 18:35:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:36:58.351 18:35:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:36:58.351 18:35:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:58.351 18:35:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:58.351 18:35:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:58.351 18:35:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:36:58.351 18:35:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:58.351 18:35:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:36:58.351 "name": "raid_bdev1", 00:36:58.351 "uuid": "fa57d09f-a397-4971-9177-35cd41711f96", 00:36:58.351 "strip_size_kb": 0, 00:36:58.351 "state": "online", 00:36:58.351 "raid_level": "raid1", 00:36:58.351 "superblock": true, 00:36:58.351 "num_base_bdevs": 2, 00:36:58.351 "num_base_bdevs_discovered": 2, 00:36:58.351 "num_base_bdevs_operational": 2, 00:36:58.351 "base_bdevs_list": [ 00:36:58.351 { 00:36:58.351 "name": "BaseBdev1", 00:36:58.351 "uuid": "86804576-71da-5436-9ff7-a8761535ba30", 00:36:58.351 "is_configured": true, 00:36:58.351 "data_offset": 256, 00:36:58.351 "data_size": 7936 00:36:58.351 }, 00:36:58.351 { 00:36:58.351 "name": "BaseBdev2", 00:36:58.351 "uuid": "249c86cb-51f5-5476-89d4-58b3ef5cf86f", 00:36:58.351 "is_configured": true, 00:36:58.351 "data_offset": 256, 00:36:58.351 "data_size": 7936 
00:36:58.351 } 00:36:58.351 ] 00:36:58.351 }' 00:36:58.351 18:35:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:36:58.351 18:35:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:36:58.611 18:35:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:36:58.611 18:35:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:36:58.611 18:35:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:58.611 18:35:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:36:58.611 [2024-12-06 18:35:29.474708] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:36:58.611 18:35:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:58.611 18:35:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=7936 00:36:58.611 18:35:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:58.611 18:35:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:58.611 18:35:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:36:58.611 18:35:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:36:58.611 18:35:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:58.611 18:35:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@619 -- # data_offset=256 00:36:58.611 18:35:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:36:58.611 18:35:29 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:36:58.611 18:35:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:36:58.611 18:35:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:36:58.611 18:35:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:36:58.611 18:35:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:36:58.611 18:35:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # local bdev_list 00:36:58.611 18:35:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:36:58.611 18:35:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # local nbd_list 00:36:58.611 18:35:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@12 -- # local i 00:36:58.611 18:35:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:36:58.611 18:35:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:36:58.870 18:35:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:36:58.870 [2024-12-06 18:35:29.734204] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:36:58.870 /dev/nbd0 00:36:58.870 18:35:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:36:58.870 18:35:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:36:58.870 18:35:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:36:58.870 18:35:29 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@873 -- # local i 00:36:58.870 18:35:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:36:58.870 18:35:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:36:58.870 18:35:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:36:58.870 18:35:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@877 -- # break 00:36:58.870 18:35:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:36:58.870 18:35:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:36:58.870 18:35:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:36:58.870 1+0 records in 00:36:58.870 1+0 records out 00:36:58.870 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000198238 s, 20.7 MB/s 00:36:58.870 18:35:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:36:58.871 18:35:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # size=4096 00:36:58.871 18:35:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:36:58.871 18:35:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:36:58.871 18:35:29 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@893 -- # return 0 00:36:58.871 18:35:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:36:58.871 18:35:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:36:58.871 18:35:29 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:36:58.871 18:35:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:36:58.871 18:35:29 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=4096 count=7936 oflag=direct 00:36:59.810 7936+0 records in 00:36:59.810 7936+0 records out 00:36:59.810 32505856 bytes (33 MB, 31 MiB) copied, 0.704139 s, 46.2 MB/s 00:36:59.810 18:35:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:36:59.810 18:35:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:36:59.810 18:35:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:36:59.810 18:35:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # local nbd_list 00:36:59.810 18:35:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@51 -- # local i 00:36:59.810 18:35:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:36:59.810 18:35:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:36:59.810 18:35:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:36:59.810 18:35:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:36:59.810 [2024-12-06 18:35:30.721692] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:36:59.810 18:35:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:36:59.810 18:35:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:36:59.810 18:35:30 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:36:59.810 18:35:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:36:59.810 18:35:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:36:59.810 18:35:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:36:59.810 18:35:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:36:59.810 18:35:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:59.810 18:35:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:36:59.810 [2024-12-06 18:35:30.736246] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:36:59.810 18:35:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:59.810 18:35:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:36:59.810 18:35:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:36:59.810 18:35:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:36:59.810 18:35:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:36:59.810 18:35:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:36:59.810 18:35:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:36:59.810 18:35:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:36:59.810 18:35:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 
00:36:59.810 18:35:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:36:59.810 18:35:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:36:59.810 18:35:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:36:59.810 18:35:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:36:59.810 18:35:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:59.810 18:35:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:37:00.069 18:35:30 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:00.069 18:35:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:37:00.069 "name": "raid_bdev1", 00:37:00.069 "uuid": "fa57d09f-a397-4971-9177-35cd41711f96", 00:37:00.069 "strip_size_kb": 0, 00:37:00.069 "state": "online", 00:37:00.069 "raid_level": "raid1", 00:37:00.069 "superblock": true, 00:37:00.069 "num_base_bdevs": 2, 00:37:00.069 "num_base_bdevs_discovered": 1, 00:37:00.069 "num_base_bdevs_operational": 1, 00:37:00.069 "base_bdevs_list": [ 00:37:00.069 { 00:37:00.069 "name": null, 00:37:00.069 "uuid": "00000000-0000-0000-0000-000000000000", 00:37:00.069 "is_configured": false, 00:37:00.069 "data_offset": 0, 00:37:00.069 "data_size": 7936 00:37:00.069 }, 00:37:00.069 { 00:37:00.069 "name": "BaseBdev2", 00:37:00.069 "uuid": "249c86cb-51f5-5476-89d4-58b3ef5cf86f", 00:37:00.069 "is_configured": true, 00:37:00.069 "data_offset": 256, 00:37:00.069 "data_size": 7936 00:37:00.069 } 00:37:00.069 ] 00:37:00.069 }' 00:37:00.069 18:35:30 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:37:00.069 18:35:30 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@10 -- # set +x 00:37:00.329 18:35:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:37:00.329 18:35:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:00.329 18:35:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:37:00.329 [2024-12-06 18:35:31.171595] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:37:00.329 [2024-12-06 18:35:31.188241] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d260 00:37:00.329 18:35:31 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:00.329 18:35:31 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@647 -- # sleep 1 00:37:00.329 [2024-12-06 18:35:31.190702] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:37:01.267 18:35:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:37:01.267 18:35:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:37:01.267 18:35:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:37:01.267 18:35:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:37:01.267 18:35:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:37:01.267 18:35:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:01.267 18:35:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:01.267 18:35:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | 
select(.name == "raid_bdev1")' 00:37:01.267 18:35:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:37:01.527 18:35:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:01.527 18:35:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:37:01.527 "name": "raid_bdev1", 00:37:01.527 "uuid": "fa57d09f-a397-4971-9177-35cd41711f96", 00:37:01.527 "strip_size_kb": 0, 00:37:01.527 "state": "online", 00:37:01.527 "raid_level": "raid1", 00:37:01.527 "superblock": true, 00:37:01.527 "num_base_bdevs": 2, 00:37:01.527 "num_base_bdevs_discovered": 2, 00:37:01.527 "num_base_bdevs_operational": 2, 00:37:01.527 "process": { 00:37:01.527 "type": "rebuild", 00:37:01.527 "target": "spare", 00:37:01.527 "progress": { 00:37:01.527 "blocks": 2560, 00:37:01.527 "percent": 32 00:37:01.527 } 00:37:01.527 }, 00:37:01.527 "base_bdevs_list": [ 00:37:01.527 { 00:37:01.527 "name": "spare", 00:37:01.527 "uuid": "e3597a99-d097-5f2f-904d-0efd37bd7522", 00:37:01.527 "is_configured": true, 00:37:01.527 "data_offset": 256, 00:37:01.527 "data_size": 7936 00:37:01.527 }, 00:37:01.527 { 00:37:01.527 "name": "BaseBdev2", 00:37:01.527 "uuid": "249c86cb-51f5-5476-89d4-58b3ef5cf86f", 00:37:01.527 "is_configured": true, 00:37:01.527 "data_offset": 256, 00:37:01.527 "data_size": 7936 00:37:01.527 } 00:37:01.527 ] 00:37:01.527 }' 00:37:01.527 18:35:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:37:01.527 18:35:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:37:01.527 18:35:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:37:01.527 18:35:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:37:01.527 18:35:32 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:37:01.527 18:35:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:01.527 18:35:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:37:01.527 [2024-12-06 18:35:32.330864] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:37:01.527 [2024-12-06 18:35:32.399677] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:37:01.527 [2024-12-06 18:35:32.399747] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:37:01.527 [2024-12-06 18:35:32.399764] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:37:01.527 [2024-12-06 18:35:32.399779] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:37:01.527 18:35:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:01.527 18:35:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:37:01.527 18:35:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:37:01.527 18:35:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:37:01.527 18:35:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:37:01.527 18:35:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:37:01.527 18:35:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:37:01.527 18:35:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:37:01.527 18:35:32 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:37:01.527 18:35:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:37:01.527 18:35:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:37:01.527 18:35:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:01.527 18:35:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:01.527 18:35:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:01.527 18:35:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:37:01.527 18:35:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:01.527 18:35:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:37:01.527 "name": "raid_bdev1", 00:37:01.527 "uuid": "fa57d09f-a397-4971-9177-35cd41711f96", 00:37:01.527 "strip_size_kb": 0, 00:37:01.527 "state": "online", 00:37:01.527 "raid_level": "raid1", 00:37:01.527 "superblock": true, 00:37:01.527 "num_base_bdevs": 2, 00:37:01.527 "num_base_bdevs_discovered": 1, 00:37:01.527 "num_base_bdevs_operational": 1, 00:37:01.527 "base_bdevs_list": [ 00:37:01.527 { 00:37:01.527 "name": null, 00:37:01.527 "uuid": "00000000-0000-0000-0000-000000000000", 00:37:01.527 "is_configured": false, 00:37:01.527 "data_offset": 0, 00:37:01.527 "data_size": 7936 00:37:01.527 }, 00:37:01.527 { 00:37:01.527 "name": "BaseBdev2", 00:37:01.527 "uuid": "249c86cb-51f5-5476-89d4-58b3ef5cf86f", 00:37:01.527 "is_configured": true, 00:37:01.527 "data_offset": 256, 00:37:01.527 "data_size": 7936 00:37:01.527 } 00:37:01.527 ] 00:37:01.527 }' 00:37:01.527 18:35:32 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:37:01.527 18:35:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:37:02.096 18:35:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:37:02.096 18:35:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:37:02.096 18:35:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:37:02.096 18:35:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:37:02.096 18:35:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:37:02.096 18:35:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:02.096 18:35:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:02.096 18:35:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:02.096 18:35:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:37:02.096 18:35:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:02.096 18:35:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:37:02.096 "name": "raid_bdev1", 00:37:02.096 "uuid": "fa57d09f-a397-4971-9177-35cd41711f96", 00:37:02.096 "strip_size_kb": 0, 00:37:02.096 "state": "online", 00:37:02.096 "raid_level": "raid1", 00:37:02.096 "superblock": true, 00:37:02.096 "num_base_bdevs": 2, 00:37:02.096 "num_base_bdevs_discovered": 1, 00:37:02.096 "num_base_bdevs_operational": 1, 00:37:02.096 "base_bdevs_list": [ 00:37:02.096 { 00:37:02.096 "name": null, 00:37:02.096 "uuid": "00000000-0000-0000-0000-000000000000", 00:37:02.096 
"is_configured": false, 00:37:02.096 "data_offset": 0, 00:37:02.096 "data_size": 7936 00:37:02.096 }, 00:37:02.096 { 00:37:02.096 "name": "BaseBdev2", 00:37:02.096 "uuid": "249c86cb-51f5-5476-89d4-58b3ef5cf86f", 00:37:02.096 "is_configured": true, 00:37:02.096 "data_offset": 256, 00:37:02.096 "data_size": 7936 00:37:02.096 } 00:37:02.096 ] 00:37:02.096 }' 00:37:02.096 18:35:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:37:02.096 18:35:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:37:02.096 18:35:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:37:02.096 18:35:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:37:02.096 18:35:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:37:02.096 18:35:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:02.096 18:35:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:37:02.096 [2024-12-06 18:35:32.955532] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:37:02.096 [2024-12-06 18:35:32.969509] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d330 00:37:02.096 18:35:32 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:02.096 18:35:32 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@663 -- # sleep 1 00:37:02.096 [2024-12-06 18:35:32.971934] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:37:03.034 18:35:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:37:03.034 18:35:33 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:37:03.034 18:35:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:37:03.034 18:35:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:37:03.034 18:35:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:37:03.034 18:35:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:03.034 18:35:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:03.034 18:35:33 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:03.034 18:35:33 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:37:03.294 18:35:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:03.294 18:35:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:37:03.294 "name": "raid_bdev1", 00:37:03.294 "uuid": "fa57d09f-a397-4971-9177-35cd41711f96", 00:37:03.294 "strip_size_kb": 0, 00:37:03.294 "state": "online", 00:37:03.294 "raid_level": "raid1", 00:37:03.294 "superblock": true, 00:37:03.294 "num_base_bdevs": 2, 00:37:03.294 "num_base_bdevs_discovered": 2, 00:37:03.294 "num_base_bdevs_operational": 2, 00:37:03.294 "process": { 00:37:03.294 "type": "rebuild", 00:37:03.294 "target": "spare", 00:37:03.294 "progress": { 00:37:03.294 "blocks": 2560, 00:37:03.294 "percent": 32 00:37:03.294 } 00:37:03.294 }, 00:37:03.294 "base_bdevs_list": [ 00:37:03.294 { 00:37:03.294 "name": "spare", 00:37:03.294 "uuid": "e3597a99-d097-5f2f-904d-0efd37bd7522", 00:37:03.294 "is_configured": true, 00:37:03.294 "data_offset": 256, 00:37:03.294 "data_size": 7936 00:37:03.294 }, 
00:37:03.294 { 00:37:03.294 "name": "BaseBdev2", 00:37:03.294 "uuid": "249c86cb-51f5-5476-89d4-58b3ef5cf86f", 00:37:03.294 "is_configured": true, 00:37:03.294 "data_offset": 256, 00:37:03.294 "data_size": 7936 00:37:03.294 } 00:37:03.294 ] 00:37:03.294 }' 00:37:03.294 18:35:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:37:03.294 18:35:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:37:03.294 18:35:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:37:03.294 18:35:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:37:03.294 18:35:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:37:03.294 18:35:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:37:03.294 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:37:03.294 18:35:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:37:03.294 18:35:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:37:03.294 18:35:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:37:03.294 18:35:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@706 -- # local timeout=711 00:37:03.294 18:35:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:37:03.294 18:35:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:37:03.294 18:35:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:37:03.294 18:35:34 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:37:03.294 18:35:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:37:03.294 18:35:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:37:03.294 18:35:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:03.294 18:35:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:03.294 18:35:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:03.294 18:35:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:37:03.294 18:35:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:03.294 18:35:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:37:03.294 "name": "raid_bdev1", 00:37:03.294 "uuid": "fa57d09f-a397-4971-9177-35cd41711f96", 00:37:03.294 "strip_size_kb": 0, 00:37:03.294 "state": "online", 00:37:03.294 "raid_level": "raid1", 00:37:03.294 "superblock": true, 00:37:03.294 "num_base_bdevs": 2, 00:37:03.294 "num_base_bdevs_discovered": 2, 00:37:03.294 "num_base_bdevs_operational": 2, 00:37:03.294 "process": { 00:37:03.294 "type": "rebuild", 00:37:03.294 "target": "spare", 00:37:03.294 "progress": { 00:37:03.294 "blocks": 2816, 00:37:03.294 "percent": 35 00:37:03.294 } 00:37:03.294 }, 00:37:03.294 "base_bdevs_list": [ 00:37:03.294 { 00:37:03.294 "name": "spare", 00:37:03.294 "uuid": "e3597a99-d097-5f2f-904d-0efd37bd7522", 00:37:03.294 "is_configured": true, 00:37:03.294 "data_offset": 256, 00:37:03.294 "data_size": 7936 00:37:03.294 }, 00:37:03.294 { 00:37:03.294 "name": "BaseBdev2", 00:37:03.295 "uuid": "249c86cb-51f5-5476-89d4-58b3ef5cf86f", 00:37:03.295 
"is_configured": true, 00:37:03.295 "data_offset": 256, 00:37:03.295 "data_size": 7936 00:37:03.295 } 00:37:03.295 ] 00:37:03.295 }' 00:37:03.295 18:35:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:37:03.295 18:35:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:37:03.295 18:35:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:37:03.554 18:35:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:37:03.554 18:35:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@711 -- # sleep 1 00:37:04.492 18:35:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:37:04.492 18:35:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:37:04.492 18:35:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:37:04.492 18:35:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:37:04.492 18:35:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:37:04.492 18:35:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:37:04.492 18:35:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:04.492 18:35:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:04.492 18:35:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:04.492 18:35:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:37:04.492 18:35:35 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:04.492 18:35:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:37:04.492 "name": "raid_bdev1", 00:37:04.492 "uuid": "fa57d09f-a397-4971-9177-35cd41711f96", 00:37:04.492 "strip_size_kb": 0, 00:37:04.492 "state": "online", 00:37:04.492 "raid_level": "raid1", 00:37:04.492 "superblock": true, 00:37:04.492 "num_base_bdevs": 2, 00:37:04.492 "num_base_bdevs_discovered": 2, 00:37:04.492 "num_base_bdevs_operational": 2, 00:37:04.492 "process": { 00:37:04.492 "type": "rebuild", 00:37:04.492 "target": "spare", 00:37:04.492 "progress": { 00:37:04.492 "blocks": 5632, 00:37:04.492 "percent": 70 00:37:04.492 } 00:37:04.492 }, 00:37:04.492 "base_bdevs_list": [ 00:37:04.492 { 00:37:04.492 "name": "spare", 00:37:04.492 "uuid": "e3597a99-d097-5f2f-904d-0efd37bd7522", 00:37:04.492 "is_configured": true, 00:37:04.492 "data_offset": 256, 00:37:04.492 "data_size": 7936 00:37:04.492 }, 00:37:04.492 { 00:37:04.492 "name": "BaseBdev2", 00:37:04.492 "uuid": "249c86cb-51f5-5476-89d4-58b3ef5cf86f", 00:37:04.492 "is_configured": true, 00:37:04.492 "data_offset": 256, 00:37:04.492 "data_size": 7936 00:37:04.492 } 00:37:04.492 ] 00:37:04.492 }' 00:37:04.492 18:35:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:37:04.492 18:35:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:37:04.492 18:35:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:37:04.492 18:35:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:37:04.492 18:35:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@711 -- # sleep 1 00:37:05.429 [2024-12-06 18:35:36.094448] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process 
completed on raid_bdev1 00:37:05.429 [2024-12-06 18:35:36.094547] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:37:05.429 [2024-12-06 18:35:36.094664] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:37:05.690 18:35:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:37:05.690 18:35:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:37:05.690 18:35:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:37:05.690 18:35:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:37:05.690 18:35:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:37:05.690 18:35:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:37:05.690 18:35:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:05.690 18:35:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:05.690 18:35:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:05.690 18:35:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:37:05.690 18:35:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:05.690 18:35:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:37:05.690 "name": "raid_bdev1", 00:37:05.690 "uuid": "fa57d09f-a397-4971-9177-35cd41711f96", 00:37:05.690 "strip_size_kb": 0, 00:37:05.690 "state": "online", 00:37:05.690 "raid_level": "raid1", 00:37:05.690 "superblock": true, 00:37:05.690 
"num_base_bdevs": 2, 00:37:05.690 "num_base_bdevs_discovered": 2, 00:37:05.690 "num_base_bdevs_operational": 2, 00:37:05.690 "base_bdevs_list": [ 00:37:05.690 { 00:37:05.690 "name": "spare", 00:37:05.690 "uuid": "e3597a99-d097-5f2f-904d-0efd37bd7522", 00:37:05.690 "is_configured": true, 00:37:05.690 "data_offset": 256, 00:37:05.690 "data_size": 7936 00:37:05.690 }, 00:37:05.690 { 00:37:05.690 "name": "BaseBdev2", 00:37:05.690 "uuid": "249c86cb-51f5-5476-89d4-58b3ef5cf86f", 00:37:05.690 "is_configured": true, 00:37:05.690 "data_offset": 256, 00:37:05.690 "data_size": 7936 00:37:05.690 } 00:37:05.690 ] 00:37:05.690 }' 00:37:05.690 18:35:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:37:05.690 18:35:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:37:05.690 18:35:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:37:05.690 18:35:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:37:05.690 18:35:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@709 -- # break 00:37:05.690 18:35:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:37:05.690 18:35:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:37:05.690 18:35:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:37:05.690 18:35:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:37:05.690 18:35:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:37:05.690 18:35:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:05.690 
18:35:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:05.690 18:35:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:05.690 18:35:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:37:05.690 18:35:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:05.690 18:35:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:37:05.690 "name": "raid_bdev1", 00:37:05.690 "uuid": "fa57d09f-a397-4971-9177-35cd41711f96", 00:37:05.690 "strip_size_kb": 0, 00:37:05.690 "state": "online", 00:37:05.690 "raid_level": "raid1", 00:37:05.690 "superblock": true, 00:37:05.690 "num_base_bdevs": 2, 00:37:05.690 "num_base_bdevs_discovered": 2, 00:37:05.690 "num_base_bdevs_operational": 2, 00:37:05.690 "base_bdevs_list": [ 00:37:05.690 { 00:37:05.690 "name": "spare", 00:37:05.690 "uuid": "e3597a99-d097-5f2f-904d-0efd37bd7522", 00:37:05.690 "is_configured": true, 00:37:05.690 "data_offset": 256, 00:37:05.690 "data_size": 7936 00:37:05.690 }, 00:37:05.690 { 00:37:05.690 "name": "BaseBdev2", 00:37:05.690 "uuid": "249c86cb-51f5-5476-89d4-58b3ef5cf86f", 00:37:05.690 "is_configured": true, 00:37:05.690 "data_offset": 256, 00:37:05.690 "data_size": 7936 00:37:05.690 } 00:37:05.690 ] 00:37:05.690 }' 00:37:05.690 18:35:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:37:05.690 18:35:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:37:05.690 18:35:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:37:05.951 18:35:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:37:05.951 18:35:36 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:37:05.951 18:35:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:37:05.951 18:35:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:37:05.951 18:35:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:37:05.951 18:35:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:37:05.951 18:35:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:37:05.951 18:35:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:37:05.951 18:35:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:37:05.951 18:35:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:37:05.951 18:35:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:37:05.951 18:35:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:05.951 18:35:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:05.951 18:35:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:05.951 18:35:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:37:05.951 18:35:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:05.951 18:35:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:37:05.951 "name": "raid_bdev1", 00:37:05.951 "uuid": "fa57d09f-a397-4971-9177-35cd41711f96", 00:37:05.951 
"strip_size_kb": 0, 00:37:05.951 "state": "online", 00:37:05.951 "raid_level": "raid1", 00:37:05.951 "superblock": true, 00:37:05.951 "num_base_bdevs": 2, 00:37:05.951 "num_base_bdevs_discovered": 2, 00:37:05.951 "num_base_bdevs_operational": 2, 00:37:05.951 "base_bdevs_list": [ 00:37:05.951 { 00:37:05.951 "name": "spare", 00:37:05.951 "uuid": "e3597a99-d097-5f2f-904d-0efd37bd7522", 00:37:05.951 "is_configured": true, 00:37:05.951 "data_offset": 256, 00:37:05.951 "data_size": 7936 00:37:05.951 }, 00:37:05.951 { 00:37:05.951 "name": "BaseBdev2", 00:37:05.951 "uuid": "249c86cb-51f5-5476-89d4-58b3ef5cf86f", 00:37:05.951 "is_configured": true, 00:37:05.951 "data_offset": 256, 00:37:05.951 "data_size": 7936 00:37:05.951 } 00:37:05.951 ] 00:37:05.951 }' 00:37:05.951 18:35:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:37:05.951 18:35:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:37:06.211 18:35:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:37:06.211 18:35:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:06.211 18:35:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:37:06.211 [2024-12-06 18:35:37.009297] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:37:06.211 [2024-12-06 18:35:37.009337] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:37:06.211 [2024-12-06 18:35:37.009426] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:37:06.211 [2024-12-06 18:35:37.009501] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:37:06.211 [2024-12-06 18:35:37.009513] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, 
state offline 00:37:06.211 18:35:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:06.211 18:35:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:06.211 18:35:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:06.211 18:35:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:37:06.211 18:35:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@720 -- # jq length 00:37:06.211 18:35:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:06.211 18:35:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:37:06.211 18:35:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:37:06.211 18:35:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:37:06.211 18:35:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:37:06.211 18:35:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:37:06.211 18:35:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:37:06.211 18:35:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # local bdev_list 00:37:06.211 18:35:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:37:06.211 18:35:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # local nbd_list 00:37:06.211 18:35:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@12 -- # local i 00:37:06.211 18:35:37 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:37:06.211 18:35:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:37:06.211 18:35:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:37:06.471 /dev/nbd0 00:37:06.471 18:35:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:37:06.471 18:35:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:37:06.471 18:35:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:37:06.471 18:35:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # local i 00:37:06.471 18:35:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:37:06.471 18:35:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:37:06.471 18:35:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:37:06.471 18:35:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@877 -- # break 00:37:06.471 18:35:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:37:06.471 18:35:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:37:06.471 18:35:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:37:06.471 1+0 records in 00:37:06.471 1+0 records out 00:37:06.471 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000413746 s, 9.9 MB/s 00:37:06.471 18:35:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # stat -c %s 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:37:06.471 18:35:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # size=4096 00:37:06.471 18:35:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:37:06.471 18:35:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:37:06.471 18:35:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@893 -- # return 0 00:37:06.471 18:35:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:37:06.471 18:35:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:37:06.471 18:35:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:37:06.731 /dev/nbd1 00:37:06.731 18:35:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:37:06.731 18:35:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:37:06.731 18:35:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:37:06.731 18:35:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # local i 00:37:06.731 18:35:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:37:06.731 18:35:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:37:06.731 18:35:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:37:06.731 18:35:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@877 -- # break 00:37:06.731 18:35:37 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@888 -- # (( i = 1 )) 00:37:06.731 18:35:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:37:06.731 18:35:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:37:06.731 1+0 records in 00:37:06.731 1+0 records out 00:37:06.731 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000502538 s, 8.2 MB/s 00:37:06.731 18:35:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:37:06.731 18:35:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # size=4096 00:37:06.731 18:35:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:37:06.731 18:35:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:37:06.731 18:35:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@893 -- # return 0 00:37:06.731 18:35:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:37:06.731 18:35:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:37:06.731 18:35:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:37:06.991 18:35:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:37:06.991 18:35:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:37:06.991 18:35:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:37:06.991 18:35:37 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/nbd_common.sh@50 -- # local nbd_list 00:37:06.991 18:35:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@51 -- # local i 00:37:06.991 18:35:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:37:06.991 18:35:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:37:07.263 18:35:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:37:07.263 18:35:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:37:07.263 18:35:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:37:07.263 18:35:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:37:07.263 18:35:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:37:07.263 18:35:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:37:07.263 18:35:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:37:07.263 18:35:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:37:07.263 18:35:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:37:07.263 18:35:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:37:07.263 18:35:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:37:07.534 18:35:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:37:07.534 18:35:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 
00:37:07.534 18:35:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:37:07.534 18:35:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:37:07.534 18:35:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:37:07.534 18:35:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:37:07.534 18:35:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:37:07.534 18:35:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:37:07.534 18:35:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:37:07.534 18:35:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:07.534 18:35:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:37:07.534 18:35:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:07.534 18:35:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:37:07.534 18:35:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:07.534 18:35:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:37:07.534 [2024-12-06 18:35:38.231672] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:37:07.534 [2024-12-06 18:35:38.231737] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:37:07.534 [2024-12-06 18:35:38.231766] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:37:07.534 [2024-12-06 18:35:38.231779] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 
00:37:07.534 [2024-12-06 18:35:38.234402] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:37:07.534 [2024-12-06 18:35:38.234444] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:37:07.534 [2024-12-06 18:35:38.234522] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:37:07.534 [2024-12-06 18:35:38.234582] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:37:07.534 [2024-12-06 18:35:38.234757] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:37:07.534 spare 00:37:07.534 18:35:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:07.534 18:35:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:37:07.534 18:35:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:07.534 18:35:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:37:07.534 [2024-12-06 18:35:38.334687] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:37:07.534 [2024-12-06 18:35:38.334738] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:37:07.534 [2024-12-06 18:35:38.334854] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1b50 00:37:07.534 [2024-12-06 18:35:38.335016] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:37:07.534 [2024-12-06 18:35:38.335027] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:37:07.534 [2024-12-06 18:35:38.335200] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:37:07.534 18:35:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:37:07.534 18:35:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:37:07.534 18:35:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:37:07.535 18:35:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:37:07.535 18:35:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:37:07.535 18:35:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:37:07.535 18:35:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:37:07.535 18:35:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:37:07.535 18:35:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:37:07.535 18:35:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:37:07.535 18:35:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:37:07.535 18:35:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:07.535 18:35:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:07.535 18:35:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:07.535 18:35:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:37:07.535 18:35:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:07.535 18:35:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:37:07.535 "name": "raid_bdev1", 00:37:07.535 "uuid": 
"fa57d09f-a397-4971-9177-35cd41711f96", 00:37:07.535 "strip_size_kb": 0, 00:37:07.535 "state": "online", 00:37:07.535 "raid_level": "raid1", 00:37:07.535 "superblock": true, 00:37:07.535 "num_base_bdevs": 2, 00:37:07.535 "num_base_bdevs_discovered": 2, 00:37:07.535 "num_base_bdevs_operational": 2, 00:37:07.535 "base_bdevs_list": [ 00:37:07.535 { 00:37:07.535 "name": "spare", 00:37:07.535 "uuid": "e3597a99-d097-5f2f-904d-0efd37bd7522", 00:37:07.535 "is_configured": true, 00:37:07.535 "data_offset": 256, 00:37:07.535 "data_size": 7936 00:37:07.535 }, 00:37:07.535 { 00:37:07.535 "name": "BaseBdev2", 00:37:07.535 "uuid": "249c86cb-51f5-5476-89d4-58b3ef5cf86f", 00:37:07.535 "is_configured": true, 00:37:07.535 "data_offset": 256, 00:37:07.535 "data_size": 7936 00:37:07.535 } 00:37:07.535 ] 00:37:07.535 }' 00:37:07.535 18:35:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:37:07.535 18:35:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:37:08.103 18:35:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:37:08.103 18:35:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:37:08.103 18:35:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:37:08.103 18:35:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:37:08.103 18:35:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:37:08.103 18:35:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:08.103 18:35:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:08.103 18:35:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | 
select(.name == "raid_bdev1")' 00:37:08.103 18:35:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:37:08.103 18:35:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:08.103 18:35:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:37:08.104 "name": "raid_bdev1", 00:37:08.104 "uuid": "fa57d09f-a397-4971-9177-35cd41711f96", 00:37:08.104 "strip_size_kb": 0, 00:37:08.104 "state": "online", 00:37:08.104 "raid_level": "raid1", 00:37:08.104 "superblock": true, 00:37:08.104 "num_base_bdevs": 2, 00:37:08.104 "num_base_bdevs_discovered": 2, 00:37:08.104 "num_base_bdevs_operational": 2, 00:37:08.104 "base_bdevs_list": [ 00:37:08.104 { 00:37:08.104 "name": "spare", 00:37:08.104 "uuid": "e3597a99-d097-5f2f-904d-0efd37bd7522", 00:37:08.104 "is_configured": true, 00:37:08.104 "data_offset": 256, 00:37:08.104 "data_size": 7936 00:37:08.104 }, 00:37:08.104 { 00:37:08.104 "name": "BaseBdev2", 00:37:08.104 "uuid": "249c86cb-51f5-5476-89d4-58b3ef5cf86f", 00:37:08.104 "is_configured": true, 00:37:08.104 "data_offset": 256, 00:37:08.104 "data_size": 7936 00:37:08.104 } 00:37:08.104 ] 00:37:08.104 }' 00:37:08.104 18:35:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:37:08.104 18:35:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:37:08.104 18:35:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:37:08.104 18:35:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:37:08.104 18:35:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:08.104 18:35:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:37:08.104 
18:35:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:08.104 18:35:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:37:08.104 18:35:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:08.104 18:35:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:37:08.104 18:35:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:37:08.104 18:35:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:08.104 18:35:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:37:08.104 [2024-12-06 18:35:38.959010] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:37:08.104 18:35:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:08.104 18:35:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:37:08.104 18:35:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:37:08.104 18:35:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:37:08.104 18:35:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:37:08.104 18:35:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:37:08.104 18:35:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:37:08.104 18:35:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:37:08.104 18:35:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- 
# local num_base_bdevs 00:37:08.104 18:35:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:37:08.104 18:35:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:37:08.104 18:35:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:08.104 18:35:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:08.104 18:35:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:37:08.104 18:35:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:08.104 18:35:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:08.104 18:35:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:37:08.104 "name": "raid_bdev1", 00:37:08.104 "uuid": "fa57d09f-a397-4971-9177-35cd41711f96", 00:37:08.104 "strip_size_kb": 0, 00:37:08.104 "state": "online", 00:37:08.104 "raid_level": "raid1", 00:37:08.104 "superblock": true, 00:37:08.104 "num_base_bdevs": 2, 00:37:08.104 "num_base_bdevs_discovered": 1, 00:37:08.104 "num_base_bdevs_operational": 1, 00:37:08.104 "base_bdevs_list": [ 00:37:08.104 { 00:37:08.104 "name": null, 00:37:08.104 "uuid": "00000000-0000-0000-0000-000000000000", 00:37:08.104 "is_configured": false, 00:37:08.104 "data_offset": 0, 00:37:08.104 "data_size": 7936 00:37:08.104 }, 00:37:08.104 { 00:37:08.104 "name": "BaseBdev2", 00:37:08.104 "uuid": "249c86cb-51f5-5476-89d4-58b3ef5cf86f", 00:37:08.104 "is_configured": true, 00:37:08.104 "data_offset": 256, 00:37:08.104 "data_size": 7936 00:37:08.104 } 00:37:08.104 ] 00:37:08.104 }' 00:37:08.104 18:35:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:37:08.104 18:35:39 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:37:08.673 18:35:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:37:08.673 18:35:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:08.673 18:35:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:37:08.673 [2024-12-06 18:35:39.346481] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:37:08.673 [2024-12-06 18:35:39.346691] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:37:08.673 [2024-12-06 18:35:39.346713] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:37:08.673 [2024-12-06 18:35:39.346750] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:37:08.673 [2024-12-06 18:35:39.361094] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1c20 00:37:08.673 18:35:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:08.673 18:35:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@757 -- # sleep 1 00:37:08.673 [2024-12-06 18:35:39.363510] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:37:09.611 18:35:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:37:09.611 18:35:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:37:09.611 18:35:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:37:09.611 18:35:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local 
target=spare 00:37:09.611 18:35:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:37:09.611 18:35:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:09.611 18:35:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:09.611 18:35:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:09.611 18:35:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:37:09.611 18:35:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:09.611 18:35:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:37:09.611 "name": "raid_bdev1", 00:37:09.611 "uuid": "fa57d09f-a397-4971-9177-35cd41711f96", 00:37:09.611 "strip_size_kb": 0, 00:37:09.611 "state": "online", 00:37:09.611 "raid_level": "raid1", 00:37:09.611 "superblock": true, 00:37:09.611 "num_base_bdevs": 2, 00:37:09.611 "num_base_bdevs_discovered": 2, 00:37:09.611 "num_base_bdevs_operational": 2, 00:37:09.611 "process": { 00:37:09.611 "type": "rebuild", 00:37:09.611 "target": "spare", 00:37:09.611 "progress": { 00:37:09.611 "blocks": 2560, 00:37:09.611 "percent": 32 00:37:09.611 } 00:37:09.611 }, 00:37:09.611 "base_bdevs_list": [ 00:37:09.611 { 00:37:09.611 "name": "spare", 00:37:09.611 "uuid": "e3597a99-d097-5f2f-904d-0efd37bd7522", 00:37:09.611 "is_configured": true, 00:37:09.611 "data_offset": 256, 00:37:09.611 "data_size": 7936 00:37:09.611 }, 00:37:09.611 { 00:37:09.611 "name": "BaseBdev2", 00:37:09.611 "uuid": "249c86cb-51f5-5476-89d4-58b3ef5cf86f", 00:37:09.611 "is_configured": true, 00:37:09.611 "data_offset": 256, 00:37:09.611 "data_size": 7936 00:37:09.611 } 00:37:09.611 ] 00:37:09.611 }' 00:37:09.611 18:35:40 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:37:09.611 18:35:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:37:09.611 18:35:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:37:09.612 18:35:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:37:09.612 18:35:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:37:09.612 18:35:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:09.612 18:35:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:37:09.612 [2024-12-06 18:35:40.492883] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:37:09.871 [2024-12-06 18:35:40.572688] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:37:09.871 [2024-12-06 18:35:40.572760] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:37:09.871 [2024-12-06 18:35:40.572777] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:37:09.871 [2024-12-06 18:35:40.572802] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:37:09.871 18:35:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:09.871 18:35:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:37:09.871 18:35:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:37:09.871 18:35:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:37:09.871 18:35:40 bdev_raid.raid_rebuild_test_sb_md_separate 
-- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:37:09.871 18:35:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:37:09.871 18:35:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:37:09.871 18:35:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:37:09.871 18:35:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:37:09.871 18:35:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:37:09.871 18:35:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:37:09.871 18:35:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:09.871 18:35:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:09.871 18:35:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:09.871 18:35:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:37:09.871 18:35:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:09.871 18:35:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:37:09.871 "name": "raid_bdev1", 00:37:09.871 "uuid": "fa57d09f-a397-4971-9177-35cd41711f96", 00:37:09.871 "strip_size_kb": 0, 00:37:09.871 "state": "online", 00:37:09.871 "raid_level": "raid1", 00:37:09.871 "superblock": true, 00:37:09.871 "num_base_bdevs": 2, 00:37:09.871 "num_base_bdevs_discovered": 1, 00:37:09.871 "num_base_bdevs_operational": 1, 00:37:09.871 "base_bdevs_list": [ 00:37:09.871 { 00:37:09.871 "name": null, 00:37:09.871 "uuid": "00000000-0000-0000-0000-000000000000", 00:37:09.871 
"is_configured": false, 00:37:09.871 "data_offset": 0, 00:37:09.871 "data_size": 7936 00:37:09.871 }, 00:37:09.871 { 00:37:09.871 "name": "BaseBdev2", 00:37:09.871 "uuid": "249c86cb-51f5-5476-89d4-58b3ef5cf86f", 00:37:09.871 "is_configured": true, 00:37:09.871 "data_offset": 256, 00:37:09.871 "data_size": 7936 00:37:09.871 } 00:37:09.871 ] 00:37:09.871 }' 00:37:09.871 18:35:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:37:09.871 18:35:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:37:10.131 18:35:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:37:10.131 18:35:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:10.131 18:35:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:37:10.131 [2024-12-06 18:35:41.009646] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:37:10.131 [2024-12-06 18:35:41.009721] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:37:10.131 [2024-12-06 18:35:41.009753] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:37:10.131 [2024-12-06 18:35:41.009768] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:37:10.131 [2024-12-06 18:35:41.010064] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:37:10.131 [2024-12-06 18:35:41.010090] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:37:10.131 [2024-12-06 18:35:41.010167] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:37:10.131 [2024-12-06 18:35:41.010185] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 
00:37:10.131 [2024-12-06 18:35:41.010198] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:37:10.131 [2024-12-06 18:35:41.010227] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:37:10.131 [2024-12-06 18:35:41.024264] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1cf0 00:37:10.131 spare 00:37:10.131 18:35:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:10.131 18:35:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@764 -- # sleep 1 00:37:10.131 [2024-12-06 18:35:41.026736] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:37:11.507 18:35:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:37:11.507 18:35:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:37:11.507 18:35:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:37:11.507 18:35:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:37:11.507 18:35:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:37:11.507 18:35:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:11.507 18:35:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:11.507 18:35:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:11.507 18:35:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:37:11.507 18:35:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:37:11.507 18:35:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:37:11.507 "name": "raid_bdev1", 00:37:11.507 "uuid": "fa57d09f-a397-4971-9177-35cd41711f96", 00:37:11.507 "strip_size_kb": 0, 00:37:11.507 "state": "online", 00:37:11.507 "raid_level": "raid1", 00:37:11.507 "superblock": true, 00:37:11.507 "num_base_bdevs": 2, 00:37:11.507 "num_base_bdevs_discovered": 2, 00:37:11.507 "num_base_bdevs_operational": 2, 00:37:11.507 "process": { 00:37:11.507 "type": "rebuild", 00:37:11.507 "target": "spare", 00:37:11.507 "progress": { 00:37:11.507 "blocks": 2560, 00:37:11.507 "percent": 32 00:37:11.507 } 00:37:11.507 }, 00:37:11.507 "base_bdevs_list": [ 00:37:11.507 { 00:37:11.507 "name": "spare", 00:37:11.507 "uuid": "e3597a99-d097-5f2f-904d-0efd37bd7522", 00:37:11.507 "is_configured": true, 00:37:11.507 "data_offset": 256, 00:37:11.507 "data_size": 7936 00:37:11.507 }, 00:37:11.507 { 00:37:11.507 "name": "BaseBdev2", 00:37:11.507 "uuid": "249c86cb-51f5-5476-89d4-58b3ef5cf86f", 00:37:11.507 "is_configured": true, 00:37:11.507 "data_offset": 256, 00:37:11.507 "data_size": 7936 00:37:11.507 } 00:37:11.507 ] 00:37:11.507 }' 00:37:11.507 18:35:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:37:11.507 18:35:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:37:11.507 18:35:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:37:11.507 18:35:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:37:11.507 18:35:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:37:11.507 18:35:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:11.507 18:35:42 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:37:11.507 [2024-12-06 18:35:42.164233] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:37:11.507 [2024-12-06 18:35:42.235330] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:37:11.507 [2024-12-06 18:35:42.235393] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:37:11.507 [2024-12-06 18:35:42.235414] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:37:11.507 [2024-12-06 18:35:42.235424] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:37:11.507 18:35:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:11.507 18:35:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:37:11.507 18:35:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:37:11.508 18:35:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:37:11.508 18:35:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:37:11.508 18:35:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:37:11.508 18:35:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:37:11.508 18:35:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:37:11.508 18:35:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:37:11.508 18:35:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:37:11.508 18:35:42 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:37:11.508 18:35:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:11.508 18:35:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:11.508 18:35:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:11.508 18:35:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:37:11.508 18:35:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:11.508 18:35:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:37:11.508 "name": "raid_bdev1", 00:37:11.508 "uuid": "fa57d09f-a397-4971-9177-35cd41711f96", 00:37:11.508 "strip_size_kb": 0, 00:37:11.508 "state": "online", 00:37:11.508 "raid_level": "raid1", 00:37:11.508 "superblock": true, 00:37:11.508 "num_base_bdevs": 2, 00:37:11.508 "num_base_bdevs_discovered": 1, 00:37:11.508 "num_base_bdevs_operational": 1, 00:37:11.508 "base_bdevs_list": [ 00:37:11.508 { 00:37:11.508 "name": null, 00:37:11.508 "uuid": "00000000-0000-0000-0000-000000000000", 00:37:11.508 "is_configured": false, 00:37:11.508 "data_offset": 0, 00:37:11.508 "data_size": 7936 00:37:11.508 }, 00:37:11.508 { 00:37:11.508 "name": "BaseBdev2", 00:37:11.508 "uuid": "249c86cb-51f5-5476-89d4-58b3ef5cf86f", 00:37:11.508 "is_configured": true, 00:37:11.508 "data_offset": 256, 00:37:11.508 "data_size": 7936 00:37:11.508 } 00:37:11.508 ] 00:37:11.508 }' 00:37:11.508 18:35:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:37:11.508 18:35:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:37:11.765 18:35:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@770 -- # 
verify_raid_bdev_process raid_bdev1 none none 00:37:11.765 18:35:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:37:11.765 18:35:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:37:11.765 18:35:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:37:11.765 18:35:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:37:11.765 18:35:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:11.765 18:35:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:11.765 18:35:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:11.765 18:35:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:37:11.765 18:35:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:12.023 18:35:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:37:12.023 "name": "raid_bdev1", 00:37:12.023 "uuid": "fa57d09f-a397-4971-9177-35cd41711f96", 00:37:12.023 "strip_size_kb": 0, 00:37:12.023 "state": "online", 00:37:12.023 "raid_level": "raid1", 00:37:12.023 "superblock": true, 00:37:12.023 "num_base_bdevs": 2, 00:37:12.023 "num_base_bdevs_discovered": 1, 00:37:12.023 "num_base_bdevs_operational": 1, 00:37:12.023 "base_bdevs_list": [ 00:37:12.023 { 00:37:12.023 "name": null, 00:37:12.023 "uuid": "00000000-0000-0000-0000-000000000000", 00:37:12.023 "is_configured": false, 00:37:12.023 "data_offset": 0, 00:37:12.023 "data_size": 7936 00:37:12.023 }, 00:37:12.023 { 00:37:12.023 "name": "BaseBdev2", 00:37:12.023 "uuid": "249c86cb-51f5-5476-89d4-58b3ef5cf86f", 00:37:12.023 "is_configured": true, 
00:37:12.023 "data_offset": 256, 00:37:12.023 "data_size": 7936 00:37:12.023 } 00:37:12.023 ] 00:37:12.023 }' 00:37:12.023 18:35:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:37:12.023 18:35:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:37:12.023 18:35:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:37:12.023 18:35:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:37:12.023 18:35:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:37:12.023 18:35:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:12.023 18:35:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:37:12.023 18:35:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:12.023 18:35:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:37:12.023 18:35:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:12.023 18:35:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:37:12.023 [2024-12-06 18:35:42.813081] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:37:12.023 [2024-12-06 18:35:42.813155] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:37:12.023 [2024-12-06 18:35:42.813183] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:37:12.023 [2024-12-06 18:35:42.813196] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:37:12.023 [2024-12-06 18:35:42.813462] 
vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:37:12.023 [2024-12-06 18:35:42.813484] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:37:12.023 [2024-12-06 18:35:42.813540] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:37:12.023 [2024-12-06 18:35:42.813556] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:37:12.023 [2024-12-06 18:35:42.813573] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:37:12.023 [2024-12-06 18:35:42.813586] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:37:12.023 BaseBdev1 00:37:12.023 18:35:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:12.023 18:35:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@775 -- # sleep 1 00:37:12.958 18:35:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:37:12.958 18:35:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:37:12.958 18:35:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:37:12.958 18:35:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:37:12.958 18:35:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:37:12.958 18:35:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:37:12.958 18:35:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:37:12.958 18:35:43 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:37:12.958 18:35:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:37:12.958 18:35:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:37:12.958 18:35:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:12.958 18:35:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:12.958 18:35:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:12.958 18:35:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:37:12.958 18:35:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:12.958 18:35:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:37:12.958 "name": "raid_bdev1", 00:37:12.958 "uuid": "fa57d09f-a397-4971-9177-35cd41711f96", 00:37:12.958 "strip_size_kb": 0, 00:37:12.958 "state": "online", 00:37:12.958 "raid_level": "raid1", 00:37:12.958 "superblock": true, 00:37:12.958 "num_base_bdevs": 2, 00:37:12.958 "num_base_bdevs_discovered": 1, 00:37:12.958 "num_base_bdevs_operational": 1, 00:37:12.958 "base_bdevs_list": [ 00:37:12.958 { 00:37:12.958 "name": null, 00:37:12.958 "uuid": "00000000-0000-0000-0000-000000000000", 00:37:12.958 "is_configured": false, 00:37:12.958 "data_offset": 0, 00:37:12.958 "data_size": 7936 00:37:12.958 }, 00:37:12.958 { 00:37:12.958 "name": "BaseBdev2", 00:37:12.958 "uuid": "249c86cb-51f5-5476-89d4-58b3ef5cf86f", 00:37:12.958 "is_configured": true, 00:37:12.958 "data_offset": 256, 00:37:12.958 "data_size": 7936 00:37:12.958 } 00:37:12.958 ] 00:37:12.958 }' 00:37:12.958 18:35:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:37:12.958 18:35:43 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:37:13.524 18:35:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:37:13.524 18:35:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:37:13.524 18:35:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:37:13.525 18:35:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:37:13.525 18:35:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:37:13.525 18:35:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:13.525 18:35:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:13.525 18:35:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:37:13.525 18:35:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:13.525 18:35:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:13.525 18:35:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:37:13.525 "name": "raid_bdev1", 00:37:13.525 "uuid": "fa57d09f-a397-4971-9177-35cd41711f96", 00:37:13.525 "strip_size_kb": 0, 00:37:13.525 "state": "online", 00:37:13.525 "raid_level": "raid1", 00:37:13.525 "superblock": true, 00:37:13.525 "num_base_bdevs": 2, 00:37:13.525 "num_base_bdevs_discovered": 1, 00:37:13.525 "num_base_bdevs_operational": 1, 00:37:13.525 "base_bdevs_list": [ 00:37:13.525 { 00:37:13.525 "name": null, 00:37:13.525 "uuid": "00000000-0000-0000-0000-000000000000", 00:37:13.525 "is_configured": false, 00:37:13.525 "data_offset": 0, 00:37:13.525 
"data_size": 7936 00:37:13.525 }, 00:37:13.525 { 00:37:13.525 "name": "BaseBdev2", 00:37:13.525 "uuid": "249c86cb-51f5-5476-89d4-58b3ef5cf86f", 00:37:13.525 "is_configured": true, 00:37:13.525 "data_offset": 256, 00:37:13.525 "data_size": 7936 00:37:13.525 } 00:37:13.525 ] 00:37:13.525 }' 00:37:13.525 18:35:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:37:13.525 18:35:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:37:13.525 18:35:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:37:13.525 18:35:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:37:13.525 18:35:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:37:13.525 18:35:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@652 -- # local es=0 00:37:13.525 18:35:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:37:13.525 18:35:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:37:13.525 18:35:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:37:13.525 18:35:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:37:13.525 18:35:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:37:13.525 18:35:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:37:13.525 18:35:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 
00:37:13.525 18:35:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:37:13.525 [2024-12-06 18:35:44.398911] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:37:13.525 [2024-12-06 18:35:44.399088] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:37:13.525 [2024-12-06 18:35:44.399108] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:37:13.525 request: 00:37:13.525 { 00:37:13.525 "base_bdev": "BaseBdev1", 00:37:13.525 "raid_bdev": "raid_bdev1", 00:37:13.525 "method": "bdev_raid_add_base_bdev", 00:37:13.525 "req_id": 1 00:37:13.525 } 00:37:13.525 Got JSON-RPC error response 00:37:13.525 response: 00:37:13.525 { 00:37:13.525 "code": -22, 00:37:13.525 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:37:13.525 } 00:37:13.525 18:35:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:37:13.525 18:35:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@655 -- # es=1 00:37:13.525 18:35:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:37:13.525 18:35:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:37:13.525 18:35:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:37:13.525 18:35:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@779 -- # sleep 1 00:37:14.902 18:35:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:37:14.902 18:35:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:37:14.902 18:35:45 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@104 -- # local expected_state=online 00:37:14.902 18:35:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:37:14.902 18:35:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:37:14.902 18:35:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:37:14.902 18:35:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:37:14.902 18:35:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:37:14.902 18:35:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:37:14.902 18:35:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:37:14.902 18:35:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:14.902 18:35:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:14.902 18:35:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:37:14.902 18:35:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:14.902 18:35:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:14.902 18:35:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:37:14.902 "name": "raid_bdev1", 00:37:14.902 "uuid": "fa57d09f-a397-4971-9177-35cd41711f96", 00:37:14.902 "strip_size_kb": 0, 00:37:14.902 "state": "online", 00:37:14.902 "raid_level": "raid1", 00:37:14.902 "superblock": true, 00:37:14.902 "num_base_bdevs": 2, 00:37:14.902 "num_base_bdevs_discovered": 1, 00:37:14.902 "num_base_bdevs_operational": 1, 00:37:14.902 "base_bdevs_list": [ 
00:37:14.902 { 00:37:14.902 "name": null, 00:37:14.902 "uuid": "00000000-0000-0000-0000-000000000000", 00:37:14.902 "is_configured": false, 00:37:14.902 "data_offset": 0, 00:37:14.902 "data_size": 7936 00:37:14.902 }, 00:37:14.902 { 00:37:14.902 "name": "BaseBdev2", 00:37:14.902 "uuid": "249c86cb-51f5-5476-89d4-58b3ef5cf86f", 00:37:14.902 "is_configured": true, 00:37:14.902 "data_offset": 256, 00:37:14.902 "data_size": 7936 00:37:14.902 } 00:37:14.902 ] 00:37:14.902 }' 00:37:14.902 18:35:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:37:14.902 18:35:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:37:14.902 18:35:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:37:14.902 18:35:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:37:14.902 18:35:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:37:14.902 18:35:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:37:14.902 18:35:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:37:14.902 18:35:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:14.902 18:35:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:14.902 18:35:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:37:14.902 18:35:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:14.902 18:35:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:15.161 18:35:45 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:37:15.161 "name": "raid_bdev1", 00:37:15.161 "uuid": "fa57d09f-a397-4971-9177-35cd41711f96", 00:37:15.161 "strip_size_kb": 0, 00:37:15.161 "state": "online", 00:37:15.161 "raid_level": "raid1", 00:37:15.161 "superblock": true, 00:37:15.161 "num_base_bdevs": 2, 00:37:15.161 "num_base_bdevs_discovered": 1, 00:37:15.161 "num_base_bdevs_operational": 1, 00:37:15.161 "base_bdevs_list": [ 00:37:15.161 { 00:37:15.161 "name": null, 00:37:15.161 "uuid": "00000000-0000-0000-0000-000000000000", 00:37:15.161 "is_configured": false, 00:37:15.161 "data_offset": 0, 00:37:15.161 "data_size": 7936 00:37:15.161 }, 00:37:15.161 { 00:37:15.161 "name": "BaseBdev2", 00:37:15.161 "uuid": "249c86cb-51f5-5476-89d4-58b3ef5cf86f", 00:37:15.161 "is_configured": true, 00:37:15.161 "data_offset": 256, 00:37:15.161 "data_size": 7936 00:37:15.161 } 00:37:15.161 ] 00:37:15.161 }' 00:37:15.161 18:35:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:37:15.161 18:35:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:37:15.161 18:35:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:37:15.161 18:35:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:37:15.161 18:35:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@784 -- # killprocess 87481 00:37:15.162 18:35:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@954 -- # '[' -z 87481 ']' 00:37:15.162 18:35:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@958 -- # kill -0 87481 00:37:15.162 18:35:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@959 -- # uname 00:37:15.162 18:35:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:37:15.162 
18:35:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 87481 00:37:15.162 18:35:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:37:15.162 18:35:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:37:15.162 killing process with pid 87481 00:37:15.162 18:35:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@972 -- # echo 'killing process with pid 87481' 00:37:15.162 18:35:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@973 -- # kill 87481 00:37:15.162 Received shutdown signal, test time was about 60.000000 seconds 00:37:15.162 00:37:15.162 Latency(us) 00:37:15.162 [2024-12-06T18:35:46.111Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:15.162 [2024-12-06T18:35:46.111Z] =================================================================================================================== 00:37:15.162 [2024-12-06T18:35:46.111Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:37:15.162 [2024-12-06 18:35:46.008855] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:37:15.162 18:35:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@978 -- # wait 87481 00:37:15.162 [2024-12-06 18:35:46.009016] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:37:15.162 [2024-12-06 18:35:46.009072] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:37:15.162 [2024-12-06 18:35:46.009086] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:37:15.420 [2024-12-06 18:35:46.344173] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:37:16.799 18:35:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@786 -- # 
return 0 00:37:16.799 00:37:16.799 real 0m19.609s 00:37:16.799 user 0m24.987s 00:37:16.799 sys 0m3.034s 00:37:16.799 18:35:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:16.800 18:35:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:37:16.800 ************************************ 00:37:16.800 END TEST raid_rebuild_test_sb_md_separate 00:37:16.800 ************************************ 00:37:16.800 18:35:47 bdev_raid -- bdev/bdev_raid.sh@1010 -- # base_malloc_params='-m 32 -i' 00:37:16.800 18:35:47 bdev_raid -- bdev/bdev_raid.sh@1011 -- # run_test raid_state_function_test_sb_md_interleaved raid_state_function_test raid1 2 true 00:37:16.800 18:35:47 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:37:16.800 18:35:47 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:37:16.800 18:35:47 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:37:16.800 ************************************ 00:37:16.800 START TEST raid_state_function_test_sb_md_interleaved 00:37:16.800 ************************************ 00:37:16.800 18:35:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 2 true 00:37:16.800 18:35:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:37:16.800 18:35:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:37:16.800 18:35:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:37:16.800 18:35:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:37:16.800 18:35:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:37:16.800 18:35:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:37:16.800 18:35:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:37:16.800 18:35:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:37:16.800 18:35:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:37:16.800 18:35:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:37:16.800 18:35:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:37:16.800 18:35:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:37:16.800 18:35:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:37:16.800 18:35:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:37:16.800 18:35:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:37:16.800 18:35:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@211 -- # local strip_size 00:37:16.800 18:35:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:37:16.800 18:35:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:37:16.800 18:35:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:37:16.800 18:35:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:37:16.800 18:35:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:37:16.800 18:35:47 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:37:16.800 18:35:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@229 -- # raid_pid=88169 00:37:16.800 18:35:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:37:16.800 Process raid pid: 88169 00:37:16.800 18:35:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 88169' 00:37:16.800 18:35:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@231 -- # waitforlisten 88169 00:37:16.800 18:35:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@835 -- # '[' -z 88169 ']' 00:37:16.800 18:35:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:16.800 18:35:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@840 -- # local max_retries=100 00:37:16.800 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:16.800 18:35:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:16.800 18:35:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@844 -- # xtrace_disable 00:37:16.800 18:35:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:37:16.800 [2024-12-06 18:35:47.725305] Starting SPDK v25.01-pre git sha1 b6a18b192 / DPDK 24.03.0 initialization... 
00:37:16.800 [2024-12-06 18:35:47.725447] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:17.059 [2024-12-06 18:35:47.913242] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:17.317 [2024-12-06 18:35:48.042234] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:37:17.576 [2024-12-06 18:35:48.285820] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:37:17.576 [2024-12-06 18:35:48.285881] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:37:17.834 18:35:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:37:17.834 18:35:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@868 -- # return 0 00:37:17.834 18:35:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:37:17.834 18:35:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:17.834 18:35:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:37:17.834 [2024-12-06 18:35:48.554227] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:37:17.834 [2024-12-06 18:35:48.554297] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:37:17.834 [2024-12-06 18:35:48.554309] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:37:17.834 [2024-12-06 18:35:48.554324] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:37:17.834 18:35:48 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:17.835 18:35:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:37:17.835 18:35:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:37:17.835 18:35:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:37:17.835 18:35:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:37:17.835 18:35:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:37:17.835 18:35:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:37:17.835 18:35:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:37:17.835 18:35:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:37:17.835 18:35:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:37:17.835 18:35:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:37:17.835 18:35:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:17.835 18:35:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:17.835 18:35:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:37:17.835 18:35:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:37:17.835 18:35:48 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:17.835 18:35:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:37:17.835 "name": "Existed_Raid", 00:37:17.835 "uuid": "12718c33-6ad8-4cdc-bd02-1b1a16b24628", 00:37:17.835 "strip_size_kb": 0, 00:37:17.835 "state": "configuring", 00:37:17.835 "raid_level": "raid1", 00:37:17.835 "superblock": true, 00:37:17.835 "num_base_bdevs": 2, 00:37:17.835 "num_base_bdevs_discovered": 0, 00:37:17.835 "num_base_bdevs_operational": 2, 00:37:17.835 "base_bdevs_list": [ 00:37:17.835 { 00:37:17.835 "name": "BaseBdev1", 00:37:17.835 "uuid": "00000000-0000-0000-0000-000000000000", 00:37:17.835 "is_configured": false, 00:37:17.835 "data_offset": 0, 00:37:17.835 "data_size": 0 00:37:17.835 }, 00:37:17.835 { 00:37:17.835 "name": "BaseBdev2", 00:37:17.835 "uuid": "00000000-0000-0000-0000-000000000000", 00:37:17.835 "is_configured": false, 00:37:17.835 "data_offset": 0, 00:37:17.835 "data_size": 0 00:37:17.835 } 00:37:17.835 ] 00:37:17.835 }' 00:37:17.835 18:35:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:37:17.835 18:35:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:37:18.094 18:35:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:37:18.094 18:35:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:18.094 18:35:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:37:18.094 [2024-12-06 18:35:48.937594] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:37:18.094 [2024-12-06 18:35:48.937640] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state 
configuring 00:37:18.095 18:35:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:18.095 18:35:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:37:18.095 18:35:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:18.095 18:35:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:37:18.095 [2024-12-06 18:35:48.949579] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:37:18.095 [2024-12-06 18:35:48.949630] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:37:18.095 [2024-12-06 18:35:48.949642] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:37:18.095 [2024-12-06 18:35:48.949659] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:37:18.095 18:35:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:18.095 18:35:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev1 00:37:18.095 18:35:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:18.095 18:35:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:37:18.095 [2024-12-06 18:35:49.005260] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:37:18.095 BaseBdev1 00:37:18.095 18:35:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:18.095 18:35:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:37:18.095 18:35:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:37:18.095 18:35:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:37:18.095 18:35:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@905 -- # local i 00:37:18.095 18:35:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:37:18.095 18:35:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:37:18.095 18:35:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:37:18.095 18:35:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:18.095 18:35:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:37:18.095 18:35:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:18.095 18:35:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:37:18.095 18:35:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:18.095 18:35:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:37:18.095 [ 00:37:18.095 { 00:37:18.095 "name": "BaseBdev1", 00:37:18.095 "aliases": [ 00:37:18.095 "deaca08a-a4d8-4505-bfa6-c9e6db0b3776" 00:37:18.095 ], 00:37:18.095 "product_name": "Malloc disk", 00:37:18.095 "block_size": 4128, 00:37:18.095 "num_blocks": 8192, 00:37:18.095 "uuid": "deaca08a-a4d8-4505-bfa6-c9e6db0b3776", 00:37:18.095 "md_size": 32, 00:37:18.095 
"md_interleave": true, 00:37:18.095 "dif_type": 0, 00:37:18.095 "assigned_rate_limits": { 00:37:18.095 "rw_ios_per_sec": 0, 00:37:18.095 "rw_mbytes_per_sec": 0, 00:37:18.095 "r_mbytes_per_sec": 0, 00:37:18.095 "w_mbytes_per_sec": 0 00:37:18.095 }, 00:37:18.095 "claimed": true, 00:37:18.095 "claim_type": "exclusive_write", 00:37:18.095 "zoned": false, 00:37:18.095 "supported_io_types": { 00:37:18.095 "read": true, 00:37:18.095 "write": true, 00:37:18.095 "unmap": true, 00:37:18.095 "flush": true, 00:37:18.095 "reset": true, 00:37:18.095 "nvme_admin": false, 00:37:18.095 "nvme_io": false, 00:37:18.095 "nvme_io_md": false, 00:37:18.095 "write_zeroes": true, 00:37:18.095 "zcopy": true, 00:37:18.095 "get_zone_info": false, 00:37:18.354 "zone_management": false, 00:37:18.354 "zone_append": false, 00:37:18.354 "compare": false, 00:37:18.354 "compare_and_write": false, 00:37:18.354 "abort": true, 00:37:18.354 "seek_hole": false, 00:37:18.354 "seek_data": false, 00:37:18.354 "copy": true, 00:37:18.354 "nvme_iov_md": false 00:37:18.354 }, 00:37:18.354 "memory_domains": [ 00:37:18.354 { 00:37:18.354 "dma_device_id": "system", 00:37:18.354 "dma_device_type": 1 00:37:18.354 }, 00:37:18.354 { 00:37:18.354 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:37:18.354 "dma_device_type": 2 00:37:18.354 } 00:37:18.354 ], 00:37:18.354 "driver_specific": {} 00:37:18.354 } 00:37:18.354 ] 00:37:18.354 18:35:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:18.354 18:35:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@911 -- # return 0 00:37:18.354 18:35:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:37:18.354 18:35:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:37:18.354 18:35:49 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:37:18.354 18:35:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:37:18.354 18:35:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:37:18.354 18:35:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:37:18.354 18:35:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:37:18.354 18:35:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:37:18.354 18:35:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:37:18.354 18:35:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:37:18.354 18:35:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:18.354 18:35:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:37:18.354 18:35:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:18.354 18:35:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:37:18.354 18:35:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:18.354 18:35:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:37:18.354 "name": "Existed_Raid", 00:37:18.354 "uuid": "d9ee1ab2-a4f9-449c-953a-f07dd84719df", 00:37:18.354 "strip_size_kb": 0, 00:37:18.354 "state": "configuring", 00:37:18.354 "raid_level": "raid1", 
00:37:18.354 "superblock": true, 00:37:18.354 "num_base_bdevs": 2, 00:37:18.354 "num_base_bdevs_discovered": 1, 00:37:18.354 "num_base_bdevs_operational": 2, 00:37:18.354 "base_bdevs_list": [ 00:37:18.354 { 00:37:18.354 "name": "BaseBdev1", 00:37:18.354 "uuid": "deaca08a-a4d8-4505-bfa6-c9e6db0b3776", 00:37:18.354 "is_configured": true, 00:37:18.354 "data_offset": 256, 00:37:18.354 "data_size": 7936 00:37:18.354 }, 00:37:18.354 { 00:37:18.354 "name": "BaseBdev2", 00:37:18.354 "uuid": "00000000-0000-0000-0000-000000000000", 00:37:18.354 "is_configured": false, 00:37:18.354 "data_offset": 0, 00:37:18.354 "data_size": 0 00:37:18.354 } 00:37:18.354 ] 00:37:18.354 }' 00:37:18.354 18:35:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:37:18.354 18:35:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:37:18.613 18:35:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:37:18.613 18:35:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:18.613 18:35:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:37:18.613 [2024-12-06 18:35:49.456696] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:37:18.613 [2024-12-06 18:35:49.456745] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:37:18.613 18:35:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:18.613 18:35:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:37:18.613 18:35:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 
-- # xtrace_disable 00:37:18.613 18:35:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:37:18.613 [2024-12-06 18:35:49.468747] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:37:18.613 [2024-12-06 18:35:49.471144] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:37:18.613 [2024-12-06 18:35:49.471204] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:37:18.613 18:35:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:18.613 18:35:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:37:18.613 18:35:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:37:18.613 18:35:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:37:18.613 18:35:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:37:18.613 18:35:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:37:18.613 18:35:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:37:18.613 18:35:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:37:18.613 18:35:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:37:18.613 18:35:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:37:18.613 18:35:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:37:18.613 
18:35:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:37:18.614 18:35:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:37:18.614 18:35:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:18.614 18:35:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:37:18.614 18:35:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:18.614 18:35:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:37:18.614 18:35:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:18.614 18:35:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:37:18.614 "name": "Existed_Raid", 00:37:18.614 "uuid": "f8d7cb23-2c0d-4416-b13a-47f5c66f1295", 00:37:18.614 "strip_size_kb": 0, 00:37:18.614 "state": "configuring", 00:37:18.614 "raid_level": "raid1", 00:37:18.614 "superblock": true, 00:37:18.614 "num_base_bdevs": 2, 00:37:18.614 "num_base_bdevs_discovered": 1, 00:37:18.614 "num_base_bdevs_operational": 2, 00:37:18.614 "base_bdevs_list": [ 00:37:18.614 { 00:37:18.614 "name": "BaseBdev1", 00:37:18.614 "uuid": "deaca08a-a4d8-4505-bfa6-c9e6db0b3776", 00:37:18.614 "is_configured": true, 00:37:18.614 "data_offset": 256, 00:37:18.614 "data_size": 7936 00:37:18.614 }, 00:37:18.614 { 00:37:18.614 "name": "BaseBdev2", 00:37:18.614 "uuid": "00000000-0000-0000-0000-000000000000", 00:37:18.614 "is_configured": false, 00:37:18.614 "data_offset": 0, 00:37:18.614 "data_size": 0 00:37:18.614 } 00:37:18.614 ] 00:37:18.614 }' 00:37:18.614 18:35:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- 
# xtrace_disable 00:37:18.614 18:35:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:37:19.183 18:35:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev2 00:37:19.183 18:35:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:19.183 18:35:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:37:19.183 [2024-12-06 18:35:49.960864] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:37:19.183 [2024-12-06 18:35:49.961100] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:37:19.183 [2024-12-06 18:35:49.961116] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:37:19.183 [2024-12-06 18:35:49.961233] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:37:19.183 [2024-12-06 18:35:49.961340] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:37:19.183 [2024-12-06 18:35:49.961355] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:37:19.183 [2024-12-06 18:35:49.961424] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:37:19.183 BaseBdev2 00:37:19.183 18:35:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:19.183 18:35:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:37:19.183 18:35:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:37:19.183 18:35:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@904 -- # local bdev_timeout= 
00:37:19.183 18:35:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@905 -- # local i 00:37:19.183 18:35:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:37:19.183 18:35:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:37:19.183 18:35:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:37:19.183 18:35:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:19.183 18:35:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:37:19.183 18:35:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:19.183 18:35:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:37:19.183 18:35:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:19.183 18:35:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:37:19.183 [ 00:37:19.183 { 00:37:19.183 "name": "BaseBdev2", 00:37:19.183 "aliases": [ 00:37:19.183 "ebdb368d-d33f-4c4f-b291-0a746dc64f19" 00:37:19.183 ], 00:37:19.183 "product_name": "Malloc disk", 00:37:19.183 "block_size": 4128, 00:37:19.183 "num_blocks": 8192, 00:37:19.183 "uuid": "ebdb368d-d33f-4c4f-b291-0a746dc64f19", 00:37:19.183 "md_size": 32, 00:37:19.183 "md_interleave": true, 00:37:19.183 "dif_type": 0, 00:37:19.183 "assigned_rate_limits": { 00:37:19.183 "rw_ios_per_sec": 0, 00:37:19.183 "rw_mbytes_per_sec": 0, 00:37:19.183 "r_mbytes_per_sec": 0, 00:37:19.183 "w_mbytes_per_sec": 0 00:37:19.183 }, 00:37:19.183 "claimed": true, 00:37:19.183 "claim_type": "exclusive_write", 
00:37:19.183 "zoned": false, 00:37:19.183 "supported_io_types": { 00:37:19.183 "read": true, 00:37:19.183 "write": true, 00:37:19.183 "unmap": true, 00:37:19.183 "flush": true, 00:37:19.183 "reset": true, 00:37:19.183 "nvme_admin": false, 00:37:19.183 "nvme_io": false, 00:37:19.183 "nvme_io_md": false, 00:37:19.183 "write_zeroes": true, 00:37:19.183 "zcopy": true, 00:37:19.183 "get_zone_info": false, 00:37:19.183 "zone_management": false, 00:37:19.183 "zone_append": false, 00:37:19.183 "compare": false, 00:37:19.183 "compare_and_write": false, 00:37:19.183 "abort": true, 00:37:19.183 "seek_hole": false, 00:37:19.183 "seek_data": false, 00:37:19.183 "copy": true, 00:37:19.183 "nvme_iov_md": false 00:37:19.183 }, 00:37:19.183 "memory_domains": [ 00:37:19.183 { 00:37:19.183 "dma_device_id": "system", 00:37:19.183 "dma_device_type": 1 00:37:19.183 }, 00:37:19.183 { 00:37:19.183 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:37:19.183 "dma_device_type": 2 00:37:19.183 } 00:37:19.183 ], 00:37:19.183 "driver_specific": {} 00:37:19.183 } 00:37:19.183 ] 00:37:19.183 18:35:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:19.183 18:35:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@911 -- # return 0 00:37:19.183 18:35:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:37:19.183 18:35:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:37:19.183 18:35:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:37:19.183 18:35:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:37:19.183 18:35:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:37:19.183 
18:35:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:37:19.183 18:35:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:37:19.183 18:35:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:37:19.183 18:35:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:37:19.183 18:35:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:37:19.183 18:35:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:37:19.183 18:35:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:37:19.183 18:35:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:19.183 18:35:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:19.183 18:35:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:37:19.183 18:35:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:37:19.183 18:35:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:19.183 18:35:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:37:19.183 "name": "Existed_Raid", 00:37:19.183 "uuid": "f8d7cb23-2c0d-4416-b13a-47f5c66f1295", 00:37:19.183 "strip_size_kb": 0, 00:37:19.183 "state": "online", 00:37:19.183 "raid_level": "raid1", 00:37:19.183 "superblock": true, 00:37:19.183 "num_base_bdevs": 2, 00:37:19.183 "num_base_bdevs_discovered": 2, 00:37:19.183 
"num_base_bdevs_operational": 2, 00:37:19.183 "base_bdevs_list": [ 00:37:19.183 { 00:37:19.183 "name": "BaseBdev1", 00:37:19.183 "uuid": "deaca08a-a4d8-4505-bfa6-c9e6db0b3776", 00:37:19.183 "is_configured": true, 00:37:19.183 "data_offset": 256, 00:37:19.183 "data_size": 7936 00:37:19.183 }, 00:37:19.183 { 00:37:19.183 "name": "BaseBdev2", 00:37:19.183 "uuid": "ebdb368d-d33f-4c4f-b291-0a746dc64f19", 00:37:19.183 "is_configured": true, 00:37:19.183 "data_offset": 256, 00:37:19.183 "data_size": 7936 00:37:19.183 } 00:37:19.183 ] 00:37:19.183 }' 00:37:19.183 18:35:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:37:19.183 18:35:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:37:19.752 18:35:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:37:19.752 18:35:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:37:19.752 18:35:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:37:19.752 18:35:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:37:19.752 18:35:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local name 00:37:19.752 18:35:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:37:19.752 18:35:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:37:19.752 18:35:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:37:19.752 18:35:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:19.752 18:35:50 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:37:19.752 [2024-12-06 18:35:50.436571] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:37:19.752 18:35:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:19.752 18:35:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:37:19.752 "name": "Existed_Raid", 00:37:19.752 "aliases": [ 00:37:19.752 "f8d7cb23-2c0d-4416-b13a-47f5c66f1295" 00:37:19.752 ], 00:37:19.752 "product_name": "Raid Volume", 00:37:19.752 "block_size": 4128, 00:37:19.752 "num_blocks": 7936, 00:37:19.752 "uuid": "f8d7cb23-2c0d-4416-b13a-47f5c66f1295", 00:37:19.752 "md_size": 32, 00:37:19.752 "md_interleave": true, 00:37:19.752 "dif_type": 0, 00:37:19.752 "assigned_rate_limits": { 00:37:19.752 "rw_ios_per_sec": 0, 00:37:19.752 "rw_mbytes_per_sec": 0, 00:37:19.752 "r_mbytes_per_sec": 0, 00:37:19.752 "w_mbytes_per_sec": 0 00:37:19.752 }, 00:37:19.752 "claimed": false, 00:37:19.752 "zoned": false, 00:37:19.752 "supported_io_types": { 00:37:19.752 "read": true, 00:37:19.752 "write": true, 00:37:19.752 "unmap": false, 00:37:19.752 "flush": false, 00:37:19.752 "reset": true, 00:37:19.752 "nvme_admin": false, 00:37:19.752 "nvme_io": false, 00:37:19.752 "nvme_io_md": false, 00:37:19.752 "write_zeroes": true, 00:37:19.752 "zcopy": false, 00:37:19.752 "get_zone_info": false, 00:37:19.752 "zone_management": false, 00:37:19.752 "zone_append": false, 00:37:19.752 "compare": false, 00:37:19.752 "compare_and_write": false, 00:37:19.752 "abort": false, 00:37:19.752 "seek_hole": false, 00:37:19.752 "seek_data": false, 00:37:19.752 "copy": false, 00:37:19.752 "nvme_iov_md": false 00:37:19.752 }, 00:37:19.752 "memory_domains": [ 00:37:19.752 { 00:37:19.752 "dma_device_id": "system", 00:37:19.752 "dma_device_type": 1 00:37:19.752 }, 00:37:19.752 { 00:37:19.752 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:37:19.752 "dma_device_type": 2 00:37:19.752 }, 00:37:19.752 { 00:37:19.752 "dma_device_id": "system", 00:37:19.752 "dma_device_type": 1 00:37:19.752 }, 00:37:19.752 { 00:37:19.752 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:37:19.752 "dma_device_type": 2 00:37:19.752 } 00:37:19.752 ], 00:37:19.752 "driver_specific": { 00:37:19.752 "raid": { 00:37:19.752 "uuid": "f8d7cb23-2c0d-4416-b13a-47f5c66f1295", 00:37:19.752 "strip_size_kb": 0, 00:37:19.753 "state": "online", 00:37:19.753 "raid_level": "raid1", 00:37:19.753 "superblock": true, 00:37:19.753 "num_base_bdevs": 2, 00:37:19.753 "num_base_bdevs_discovered": 2, 00:37:19.753 "num_base_bdevs_operational": 2, 00:37:19.753 "base_bdevs_list": [ 00:37:19.753 { 00:37:19.753 "name": "BaseBdev1", 00:37:19.753 "uuid": "deaca08a-a4d8-4505-bfa6-c9e6db0b3776", 00:37:19.753 "is_configured": true, 00:37:19.753 "data_offset": 256, 00:37:19.753 "data_size": 7936 00:37:19.753 }, 00:37:19.753 { 00:37:19.753 "name": "BaseBdev2", 00:37:19.753 "uuid": "ebdb368d-d33f-4c4f-b291-0a746dc64f19", 00:37:19.753 "is_configured": true, 00:37:19.753 "data_offset": 256, 00:37:19.753 "data_size": 7936 00:37:19.753 } 00:37:19.753 ] 00:37:19.753 } 00:37:19.753 } 00:37:19.753 }' 00:37:19.753 18:35:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:37:19.753 18:35:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:37:19.753 BaseBdev2' 00:37:19.753 18:35:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:37:19.753 18:35:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4128 32 true 0' 00:37:19.753 18:35:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- 
# for name in $base_bdev_names 00:37:19.753 18:35:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:37:19.753 18:35:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:19.753 18:35:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:37:19.753 18:35:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:37:19.753 18:35:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:19.753 18:35:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:37:19.753 18:35:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:37:19.753 18:35:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:37:19.753 18:35:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:37:19.753 18:35:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:19.753 18:35:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:37:19.753 18:35:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:37:19.753 18:35:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:19.753 18:35:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:37:19.753 
18:35:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:37:19.753 18:35:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:37:19.753 18:35:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:19.753 18:35:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:37:19.753 [2024-12-06 18:35:50.648004] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:37:20.012 18:35:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:20.012 18:35:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@260 -- # local expected_state 00:37:20.012 18:35:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:37:20.012 18:35:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@198 -- # case $1 in 00:37:20.012 18:35:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@199 -- # return 0 00:37:20.012 18:35:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:37:20.012 18:35:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:37:20.012 18:35:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:37:20.012 18:35:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:37:20.012 18:35:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:37:20.012 18:35:50 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:37:20.012 18:35:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:37:20.012 18:35:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:37:20.012 18:35:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:37:20.012 18:35:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:37:20.012 18:35:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:37:20.012 18:35:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:20.012 18:35:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:20.012 18:35:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:37:20.012 18:35:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:37:20.012 18:35:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:20.012 18:35:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:37:20.012 "name": "Existed_Raid", 00:37:20.012 "uuid": "f8d7cb23-2c0d-4416-b13a-47f5c66f1295", 00:37:20.012 "strip_size_kb": 0, 00:37:20.012 "state": "online", 00:37:20.012 "raid_level": "raid1", 00:37:20.012 "superblock": true, 00:37:20.012 "num_base_bdevs": 2, 00:37:20.012 "num_base_bdevs_discovered": 1, 00:37:20.012 "num_base_bdevs_operational": 1, 00:37:20.012 "base_bdevs_list": [ 00:37:20.012 { 00:37:20.012 "name": null, 00:37:20.012 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:37:20.012 "is_configured": false, 00:37:20.012 "data_offset": 0, 00:37:20.012 "data_size": 7936 00:37:20.012 }, 00:37:20.012 { 00:37:20.012 "name": "BaseBdev2", 00:37:20.012 "uuid": "ebdb368d-d33f-4c4f-b291-0a746dc64f19", 00:37:20.012 "is_configured": true, 00:37:20.012 "data_offset": 256, 00:37:20.012 "data_size": 7936 00:37:20.012 } 00:37:20.012 ] 00:37:20.012 }' 00:37:20.012 18:35:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:37:20.012 18:35:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:37:20.270 18:35:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:37:20.270 18:35:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:37:20.270 18:35:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:20.270 18:35:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:37:20.270 18:35:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:20.270 18:35:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:37:20.270 18:35:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:20.529 18:35:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:37:20.529 18:35:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:37:20.529 18:35:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:37:20.529 18:35:51 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:20.529 18:35:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:37:20.529 [2024-12-06 18:35:51.247679] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:37:20.529 [2024-12-06 18:35:51.247816] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:37:20.529 [2024-12-06 18:35:51.351449] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:37:20.529 [2024-12-06 18:35:51.351507] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:37:20.529 [2024-12-06 18:35:51.351525] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:37:20.529 18:35:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:20.529 18:35:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:37:20.529 18:35:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:37:20.529 18:35:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:20.529 18:35:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:20.529 18:35:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:37:20.529 18:35:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:37:20.529 18:35:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:20.529 18:35:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@278 -- # raid_bdev= 00:37:20.529 18:35:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:37:20.529 18:35:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:37:20.529 18:35:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@326 -- # killprocess 88169 00:37:20.529 18:35:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@954 -- # '[' -z 88169 ']' 00:37:20.529 18:35:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@958 -- # kill -0 88169 00:37:20.529 18:35:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@959 -- # uname 00:37:20.529 18:35:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:37:20.529 18:35:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 88169 00:37:20.529 18:35:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:37:20.529 killing process with pid 88169 00:37:20.529 18:35:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:37:20.529 18:35:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@972 -- # echo 'killing process with pid 88169' 00:37:20.529 18:35:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@973 -- # kill 88169 00:37:20.529 [2024-12-06 18:35:51.440279] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:37:20.529 18:35:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@978 -- # wait 88169 00:37:20.529 [2024-12-06 18:35:51.458189] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:37:21.912 
************************************ 00:37:21.912 END TEST raid_state_function_test_sb_md_interleaved 00:37:21.912 ************************************ 00:37:21.912 18:35:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@328 -- # return 0 00:37:21.912 00:37:21.912 real 0m5.054s 00:37:21.912 user 0m7.060s 00:37:21.912 sys 0m1.061s 00:37:21.912 18:35:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:21.912 18:35:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:37:21.912 18:35:52 bdev_raid -- bdev/bdev_raid.sh@1012 -- # run_test raid_superblock_test_md_interleaved raid_superblock_test raid1 2 00:37:21.912 18:35:52 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:37:21.912 18:35:52 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:37:21.912 18:35:52 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:37:21.912 ************************************ 00:37:21.912 START TEST raid_superblock_test_md_interleaved 00:37:21.912 ************************************ 00:37:21.912 18:35:52 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 2 00:37:21.912 18:35:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:37:21.912 18:35:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:37:21.912 18:35:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:37:21.913 18:35:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:37:21.913 18:35:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:37:21.913 18:35:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@396 -- # local 
base_bdevs_pt 00:37:21.913 18:35:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:37:21.913 18:35:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:37:21.913 18:35:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:37:21.913 18:35:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@399 -- # local strip_size 00:37:21.913 18:35:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:37:21.913 18:35:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:37:21.913 18:35:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:37:21.913 18:35:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:37:21.913 18:35:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:37:21.913 18:35:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@412 -- # raid_pid=88426 00:37:21.913 18:35:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:37:21.913 18:35:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@413 -- # waitforlisten 88426 00:37:21.913 18:35:52 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@835 -- # '[' -z 88426 ']' 00:37:21.913 18:35:52 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:21.913 18:35:52 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@840 -- # local max_retries=100 00:37:21.913 18:35:52 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@842 -- # echo 
'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:21.913 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:21.913 18:35:52 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@844 -- # xtrace_disable 00:37:21.913 18:35:52 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:37:21.913 [2024-12-06 18:35:52.855947] Starting SPDK v25.01-pre git sha1 b6a18b192 / DPDK 24.03.0 initialization... 00:37:21.913 [2024-12-06 18:35:52.856306] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88426 ] 00:37:22.228 [2024-12-06 18:35:53.043912] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:22.487 [2024-12-06 18:35:53.179584] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:37:22.487 [2024-12-06 18:35:53.415115] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:37:22.487 [2024-12-06 18:35:53.415421] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:37:22.747 18:35:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:37:22.747 18:35:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@868 -- # return 0 00:37:22.747 18:35:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:37:22.747 18:35:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:37:22.747 18:35:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:37:22.747 18:35:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@418 -- # local 
bdev_pt=pt1 00:37:22.747 18:35:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:37:22.747 18:35:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:37:23.009 18:35:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:37:23.009 18:35:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:37:23.009 18:35:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b malloc1 00:37:23.009 18:35:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:23.009 18:35:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:37:23.009 malloc1 00:37:23.009 18:35:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:23.009 18:35:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:37:23.009 18:35:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:23.009 18:35:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:37:23.009 [2024-12-06 18:35:53.757876] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:37:23.009 [2024-12-06 18:35:53.757943] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:37:23.009 [2024-12-06 18:35:53.757988] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:37:23.009 [2024-12-06 18:35:53.758002] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:37:23.009 
[2024-12-06 18:35:53.760455] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:37:23.009 [2024-12-06 18:35:53.760495] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:37:23.009 pt1 00:37:23.009 18:35:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:23.009 18:35:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:37:23.009 18:35:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:37:23.009 18:35:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:37:23.009 18:35:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:37:23.009 18:35:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:37:23.009 18:35:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:37:23.009 18:35:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:37:23.009 18:35:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:37:23.009 18:35:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b malloc2 00:37:23.009 18:35:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:23.009 18:35:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:37:23.009 malloc2 00:37:23.009 18:35:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:23.009 18:35:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@426 
-- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:37:23.009 18:35:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:23.009 18:35:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:37:23.009 [2024-12-06 18:35:53.820820] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:37:23.010 [2024-12-06 18:35:53.821006] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:37:23.010 [2024-12-06 18:35:53.821068] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:37:23.010 [2024-12-06 18:35:53.821167] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:37:23.010 [2024-12-06 18:35:53.823592] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:37:23.010 [2024-12-06 18:35:53.823728] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:37:23.010 pt2 00:37:23.010 18:35:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:23.010 18:35:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:37:23.010 18:35:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:37:23.010 18:35:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:37:23.010 18:35:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:23.010 18:35:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:37:23.010 [2024-12-06 18:35:53.832848] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:37:23.010 [2024-12-06 18:35:53.835166] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:37:23.010 [2024-12-06 18:35:53.835541] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:37:23.010 [2024-12-06 18:35:53.835688] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:37:23.010 [2024-12-06 18:35:53.835813] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:37:23.010 [2024-12-06 18:35:53.836069] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:37:23.010 [2024-12-06 18:35:53.836115] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:37:23.010 [2024-12-06 18:35:53.836409] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:37:23.010 18:35:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:23.010 18:35:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:37:23.010 18:35:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:37:23.010 18:35:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:37:23.010 18:35:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:37:23.010 18:35:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:37:23.010 18:35:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:37:23.010 18:35:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:37:23.010 18:35:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:37:23.010 
18:35:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:37:23.010 18:35:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:37:23.010 18:35:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:23.010 18:35:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:23.010 18:35:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:23.010 18:35:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:37:23.010 18:35:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:23.010 18:35:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:37:23.010 "name": "raid_bdev1", 00:37:23.010 "uuid": "cc8a26ce-758c-484c-94d5-e0ea2095ba39", 00:37:23.010 "strip_size_kb": 0, 00:37:23.010 "state": "online", 00:37:23.010 "raid_level": "raid1", 00:37:23.010 "superblock": true, 00:37:23.010 "num_base_bdevs": 2, 00:37:23.010 "num_base_bdevs_discovered": 2, 00:37:23.010 "num_base_bdevs_operational": 2, 00:37:23.010 "base_bdevs_list": [ 00:37:23.010 { 00:37:23.010 "name": "pt1", 00:37:23.010 "uuid": "00000000-0000-0000-0000-000000000001", 00:37:23.010 "is_configured": true, 00:37:23.010 "data_offset": 256, 00:37:23.010 "data_size": 7936 00:37:23.010 }, 00:37:23.010 { 00:37:23.010 "name": "pt2", 00:37:23.010 "uuid": "00000000-0000-0000-0000-000000000002", 00:37:23.010 "is_configured": true, 00:37:23.010 "data_offset": 256, 00:37:23.010 "data_size": 7936 00:37:23.010 } 00:37:23.010 ] 00:37:23.010 }' 00:37:23.010 18:35:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:37:23.010 18:35:53 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:37:23.581 18:35:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:37:23.581 18:35:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:37:23.581 18:35:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:37:23.581 18:35:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:37:23.581 18:35:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@184 -- # local name 00:37:23.581 18:35:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:37:23.581 18:35:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:37:23.581 18:35:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:37:23.581 18:35:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:23.581 18:35:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:37:23.581 [2024-12-06 18:35:54.228585] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:37:23.581 18:35:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:23.581 18:35:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:37:23.581 "name": "raid_bdev1", 00:37:23.581 "aliases": [ 00:37:23.581 "cc8a26ce-758c-484c-94d5-e0ea2095ba39" 00:37:23.581 ], 00:37:23.581 "product_name": "Raid Volume", 00:37:23.581 "block_size": 4128, 00:37:23.581 "num_blocks": 7936, 00:37:23.581 "uuid": "cc8a26ce-758c-484c-94d5-e0ea2095ba39", 00:37:23.581 "md_size": 32, 
00:37:23.581 "md_interleave": true, 00:37:23.581 "dif_type": 0, 00:37:23.581 "assigned_rate_limits": { 00:37:23.581 "rw_ios_per_sec": 0, 00:37:23.581 "rw_mbytes_per_sec": 0, 00:37:23.581 "r_mbytes_per_sec": 0, 00:37:23.581 "w_mbytes_per_sec": 0 00:37:23.581 }, 00:37:23.581 "claimed": false, 00:37:23.581 "zoned": false, 00:37:23.581 "supported_io_types": { 00:37:23.581 "read": true, 00:37:23.581 "write": true, 00:37:23.581 "unmap": false, 00:37:23.581 "flush": false, 00:37:23.581 "reset": true, 00:37:23.581 "nvme_admin": false, 00:37:23.581 "nvme_io": false, 00:37:23.581 "nvme_io_md": false, 00:37:23.581 "write_zeroes": true, 00:37:23.581 "zcopy": false, 00:37:23.581 "get_zone_info": false, 00:37:23.581 "zone_management": false, 00:37:23.581 "zone_append": false, 00:37:23.581 "compare": false, 00:37:23.581 "compare_and_write": false, 00:37:23.581 "abort": false, 00:37:23.581 "seek_hole": false, 00:37:23.581 "seek_data": false, 00:37:23.581 "copy": false, 00:37:23.581 "nvme_iov_md": false 00:37:23.581 }, 00:37:23.581 "memory_domains": [ 00:37:23.581 { 00:37:23.581 "dma_device_id": "system", 00:37:23.581 "dma_device_type": 1 00:37:23.581 }, 00:37:23.581 { 00:37:23.581 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:37:23.581 "dma_device_type": 2 00:37:23.581 }, 00:37:23.581 { 00:37:23.581 "dma_device_id": "system", 00:37:23.581 "dma_device_type": 1 00:37:23.581 }, 00:37:23.581 { 00:37:23.581 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:37:23.581 "dma_device_type": 2 00:37:23.581 } 00:37:23.581 ], 00:37:23.581 "driver_specific": { 00:37:23.581 "raid": { 00:37:23.581 "uuid": "cc8a26ce-758c-484c-94d5-e0ea2095ba39", 00:37:23.581 "strip_size_kb": 0, 00:37:23.581 "state": "online", 00:37:23.581 "raid_level": "raid1", 00:37:23.581 "superblock": true, 00:37:23.581 "num_base_bdevs": 2, 00:37:23.581 "num_base_bdevs_discovered": 2, 00:37:23.581 "num_base_bdevs_operational": 2, 00:37:23.581 "base_bdevs_list": [ 00:37:23.581 { 00:37:23.581 "name": "pt1", 00:37:23.581 "uuid": 
"00000000-0000-0000-0000-000000000001", 00:37:23.581 "is_configured": true, 00:37:23.581 "data_offset": 256, 00:37:23.581 "data_size": 7936 00:37:23.581 }, 00:37:23.581 { 00:37:23.581 "name": "pt2", 00:37:23.581 "uuid": "00000000-0000-0000-0000-000000000002", 00:37:23.581 "is_configured": true, 00:37:23.581 "data_offset": 256, 00:37:23.581 "data_size": 7936 00:37:23.581 } 00:37:23.581 ] 00:37:23.581 } 00:37:23.581 } 00:37:23.581 }' 00:37:23.581 18:35:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:37:23.581 18:35:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:37:23.581 pt2' 00:37:23.581 18:35:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:37:23.581 18:35:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4128 32 true 0' 00:37:23.581 18:35:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:37:23.581 18:35:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:37:23.581 18:35:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:23.581 18:35:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:37:23.581 18:35:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:37:23.581 18:35:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:23.581 18:35:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:37:23.581 18:35:54 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:37:23.581 18:35:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:37:23.581 18:35:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:37:23.581 18:35:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:37:23.581 18:35:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:23.581 18:35:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:37:23.581 18:35:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:23.581 18:35:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:37:23.581 18:35:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:37:23.581 18:35:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:37:23.581 18:35:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:37:23.581 18:35:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:23.581 18:35:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:37:23.581 [2024-12-06 18:35:54.432318] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:37:23.582 18:35:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:23.582 18:35:54 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=cc8a26ce-758c-484c-94d5-e0ea2095ba39 00:37:23.582 18:35:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@436 -- # '[' -z cc8a26ce-758c-484c-94d5-e0ea2095ba39 ']' 00:37:23.582 18:35:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:37:23.582 18:35:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:23.582 18:35:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:37:23.582 [2024-12-06 18:35:54.467985] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:37:23.582 [2024-12-06 18:35:54.468010] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:37:23.582 [2024-12-06 18:35:54.468091] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:37:23.582 [2024-12-06 18:35:54.468169] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:37:23.582 [2024-12-06 18:35:54.468186] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:37:23.582 18:35:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:23.582 18:35:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:23.582 18:35:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:23.582 18:35:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:37:23.582 18:35:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:37:23.582 18:35:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:23.582 18:35:54 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:37:23.582 18:35:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:37:23.582 18:35:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:37:23.582 18:35:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:37:23.582 18:35:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:23.582 18:35:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:37:23.582 18:35:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:23.582 18:35:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:37:23.582 18:35:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:37:23.582 18:35:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:23.582 18:35:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:37:23.843 18:35:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:23.843 18:35:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:37:23.843 18:35:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:37:23.843 18:35:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:23.843 18:35:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:37:23.843 18:35:54 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:23.843 18:35:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:37:23.843 18:35:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:37:23.843 18:35:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@652 -- # local es=0 00:37:23.843 18:35:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:37:23.843 18:35:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:37:23.843 18:35:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:37:23.843 18:35:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:37:23.843 18:35:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:37:23.843 18:35:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:37:23.843 18:35:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:23.843 18:35:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:37:23.843 [2024-12-06 18:35:54.595832] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:37:23.843 [2024-12-06 18:35:54.598326] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:37:23.843 [2024-12-06 18:35:54.598433] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: 
Superblock of a different raid bdev found on bdev malloc1 00:37:23.843 [2024-12-06 18:35:54.598596] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:37:23.843 [2024-12-06 18:35:54.598788] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:37:23.843 [2024-12-06 18:35:54.598826] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:37:23.843 request: 00:37:23.843 { 00:37:23.843 "name": "raid_bdev1", 00:37:23.843 "raid_level": "raid1", 00:37:23.843 "base_bdevs": [ 00:37:23.843 "malloc1", 00:37:23.843 "malloc2" 00:37:23.843 ], 00:37:23.843 "superblock": false, 00:37:23.843 "method": "bdev_raid_create", 00:37:23.843 "req_id": 1 00:37:23.843 } 00:37:23.843 Got JSON-RPC error response 00:37:23.843 response: 00:37:23.843 { 00:37:23.843 "code": -17, 00:37:23.843 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:37:23.843 } 00:37:23.843 18:35:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:37:23.843 18:35:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@655 -- # es=1 00:37:23.843 18:35:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:37:23.843 18:35:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:37:23.843 18:35:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:37:23.843 18:35:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:23.843 18:35:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:37:23.843 18:35:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:23.843 18:35:54 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:37:23.843 18:35:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:23.843 18:35:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:37:23.843 18:35:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:37:23.843 18:35:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:37:23.843 18:35:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:23.843 18:35:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:37:23.843 [2024-12-06 18:35:54.659729] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:37:23.843 [2024-12-06 18:35:54.659782] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:37:23.843 [2024-12-06 18:35:54.659801] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:37:23.843 [2024-12-06 18:35:54.659816] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:37:23.843 [2024-12-06 18:35:54.662253] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:37:23.843 [2024-12-06 18:35:54.662296] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:37:23.843 [2024-12-06 18:35:54.662346] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:37:23.843 [2024-12-06 18:35:54.662406] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:37:23.843 pt1 00:37:23.843 18:35:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:23.843 18:35:54 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:37:23.843 18:35:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:37:23.843 18:35:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:37:23.843 18:35:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:37:23.843 18:35:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:37:23.843 18:35:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:37:23.843 18:35:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:37:23.843 18:35:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:37:23.843 18:35:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:37:23.844 18:35:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:37:23.844 18:35:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:23.844 18:35:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:23.844 18:35:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:23.844 18:35:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:37:23.844 18:35:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:23.844 18:35:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:37:23.844 
"name": "raid_bdev1", 00:37:23.844 "uuid": "cc8a26ce-758c-484c-94d5-e0ea2095ba39", 00:37:23.844 "strip_size_kb": 0, 00:37:23.844 "state": "configuring", 00:37:23.844 "raid_level": "raid1", 00:37:23.844 "superblock": true, 00:37:23.844 "num_base_bdevs": 2, 00:37:23.844 "num_base_bdevs_discovered": 1, 00:37:23.844 "num_base_bdevs_operational": 2, 00:37:23.844 "base_bdevs_list": [ 00:37:23.844 { 00:37:23.844 "name": "pt1", 00:37:23.844 "uuid": "00000000-0000-0000-0000-000000000001", 00:37:23.844 "is_configured": true, 00:37:23.844 "data_offset": 256, 00:37:23.844 "data_size": 7936 00:37:23.844 }, 00:37:23.844 { 00:37:23.844 "name": null, 00:37:23.844 "uuid": "00000000-0000-0000-0000-000000000002", 00:37:23.844 "is_configured": false, 00:37:23.844 "data_offset": 256, 00:37:23.844 "data_size": 7936 00:37:23.844 } 00:37:23.844 ] 00:37:23.844 }' 00:37:23.844 18:35:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:37:23.844 18:35:54 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:37:24.411 18:35:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:37:24.411 18:35:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:37:24.411 18:35:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:37:24.411 18:35:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:37:24.411 18:35:55 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:24.411 18:35:55 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:37:24.411 [2024-12-06 18:35:55.099270] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:37:24.411 [2024-12-06 18:35:55.099457] 
vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:37:24.411 [2024-12-06 18:35:55.099487] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:37:24.411 [2024-12-06 18:35:55.099503] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:37:24.411 [2024-12-06 18:35:55.099670] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:37:24.411 [2024-12-06 18:35:55.099692] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:37:24.411 [2024-12-06 18:35:55.099742] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:37:24.411 [2024-12-06 18:35:55.099765] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:37:24.411 [2024-12-06 18:35:55.099854] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:37:24.411 [2024-12-06 18:35:55.099868] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:37:24.411 [2024-12-06 18:35:55.099943] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:37:24.411 [2024-12-06 18:35:55.100014] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:37:24.411 [2024-12-06 18:35:55.100024] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:37:24.411 [2024-12-06 18:35:55.100089] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:37:24.411 pt2 00:37:24.411 18:35:55 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:24.411 18:35:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:37:24.411 18:35:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:37:24.411 18:35:55 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:37:24.411 18:35:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:37:24.411 18:35:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:37:24.411 18:35:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:37:24.411 18:35:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:37:24.411 18:35:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:37:24.411 18:35:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:37:24.411 18:35:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:37:24.411 18:35:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:37:24.411 18:35:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:37:24.411 18:35:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:24.411 18:35:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:24.411 18:35:55 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:24.411 18:35:55 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:37:24.411 18:35:55 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:24.411 18:35:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:37:24.411 "name": 
"raid_bdev1", 00:37:24.411 "uuid": "cc8a26ce-758c-484c-94d5-e0ea2095ba39", 00:37:24.411 "strip_size_kb": 0, 00:37:24.411 "state": "online", 00:37:24.411 "raid_level": "raid1", 00:37:24.411 "superblock": true, 00:37:24.411 "num_base_bdevs": 2, 00:37:24.411 "num_base_bdevs_discovered": 2, 00:37:24.411 "num_base_bdevs_operational": 2, 00:37:24.411 "base_bdevs_list": [ 00:37:24.411 { 00:37:24.411 "name": "pt1", 00:37:24.411 "uuid": "00000000-0000-0000-0000-000000000001", 00:37:24.411 "is_configured": true, 00:37:24.411 "data_offset": 256, 00:37:24.411 "data_size": 7936 00:37:24.411 }, 00:37:24.411 { 00:37:24.411 "name": "pt2", 00:37:24.411 "uuid": "00000000-0000-0000-0000-000000000002", 00:37:24.411 "is_configured": true, 00:37:24.411 "data_offset": 256, 00:37:24.411 "data_size": 7936 00:37:24.411 } 00:37:24.411 ] 00:37:24.411 }' 00:37:24.411 18:35:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:37:24.411 18:35:55 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:37:24.669 18:35:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:37:24.669 18:35:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:37:24.669 18:35:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:37:24.669 18:35:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:37:24.669 18:35:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@184 -- # local name 00:37:24.669 18:35:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:37:24.669 18:35:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:37:24.669 18:35:55 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:37:24.669 18:35:55 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:24.669 18:35:55 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:37:24.669 [2024-12-06 18:35:55.515031] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:37:24.669 18:35:55 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:24.669 18:35:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:37:24.669 "name": "raid_bdev1", 00:37:24.669 "aliases": [ 00:37:24.669 "cc8a26ce-758c-484c-94d5-e0ea2095ba39" 00:37:24.669 ], 00:37:24.669 "product_name": "Raid Volume", 00:37:24.669 "block_size": 4128, 00:37:24.669 "num_blocks": 7936, 00:37:24.669 "uuid": "cc8a26ce-758c-484c-94d5-e0ea2095ba39", 00:37:24.669 "md_size": 32, 00:37:24.669 "md_interleave": true, 00:37:24.669 "dif_type": 0, 00:37:24.669 "assigned_rate_limits": { 00:37:24.669 "rw_ios_per_sec": 0, 00:37:24.669 "rw_mbytes_per_sec": 0, 00:37:24.669 "r_mbytes_per_sec": 0, 00:37:24.669 "w_mbytes_per_sec": 0 00:37:24.669 }, 00:37:24.669 "claimed": false, 00:37:24.669 "zoned": false, 00:37:24.669 "supported_io_types": { 00:37:24.669 "read": true, 00:37:24.669 "write": true, 00:37:24.669 "unmap": false, 00:37:24.669 "flush": false, 00:37:24.669 "reset": true, 00:37:24.669 "nvme_admin": false, 00:37:24.669 "nvme_io": false, 00:37:24.669 "nvme_io_md": false, 00:37:24.669 "write_zeroes": true, 00:37:24.669 "zcopy": false, 00:37:24.669 "get_zone_info": false, 00:37:24.669 "zone_management": false, 00:37:24.669 "zone_append": false, 00:37:24.669 "compare": false, 00:37:24.669 "compare_and_write": false, 00:37:24.669 "abort": false, 00:37:24.669 "seek_hole": false, 00:37:24.669 "seek_data": false, 00:37:24.669 "copy": false, 00:37:24.669 "nvme_iov_md": 
false 00:37:24.669 }, 00:37:24.669 "memory_domains": [ 00:37:24.669 { 00:37:24.669 "dma_device_id": "system", 00:37:24.669 "dma_device_type": 1 00:37:24.669 }, 00:37:24.669 { 00:37:24.669 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:37:24.669 "dma_device_type": 2 00:37:24.669 }, 00:37:24.669 { 00:37:24.669 "dma_device_id": "system", 00:37:24.669 "dma_device_type": 1 00:37:24.669 }, 00:37:24.669 { 00:37:24.669 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:37:24.669 "dma_device_type": 2 00:37:24.669 } 00:37:24.669 ], 00:37:24.669 "driver_specific": { 00:37:24.669 "raid": { 00:37:24.669 "uuid": "cc8a26ce-758c-484c-94d5-e0ea2095ba39", 00:37:24.669 "strip_size_kb": 0, 00:37:24.669 "state": "online", 00:37:24.669 "raid_level": "raid1", 00:37:24.669 "superblock": true, 00:37:24.669 "num_base_bdevs": 2, 00:37:24.669 "num_base_bdevs_discovered": 2, 00:37:24.669 "num_base_bdevs_operational": 2, 00:37:24.669 "base_bdevs_list": [ 00:37:24.669 { 00:37:24.669 "name": "pt1", 00:37:24.669 "uuid": "00000000-0000-0000-0000-000000000001", 00:37:24.669 "is_configured": true, 00:37:24.669 "data_offset": 256, 00:37:24.669 "data_size": 7936 00:37:24.669 }, 00:37:24.669 { 00:37:24.669 "name": "pt2", 00:37:24.669 "uuid": "00000000-0000-0000-0000-000000000002", 00:37:24.669 "is_configured": true, 00:37:24.669 "data_offset": 256, 00:37:24.669 "data_size": 7936 00:37:24.669 } 00:37:24.669 ] 00:37:24.669 } 00:37:24.669 } 00:37:24.669 }' 00:37:24.669 18:35:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:37:24.669 18:35:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:37:24.669 pt2' 00:37:24.669 18:35:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:37:24.928 18:35:55 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4128 32 true 0' 00:37:24.928 18:35:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:37:24.928 18:35:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:37:24.928 18:35:55 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:24.928 18:35:55 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:37:24.928 18:35:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:37:24.928 18:35:55 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:24.928 18:35:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:37:24.928 18:35:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:37:24.928 18:35:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:37:24.928 18:35:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:37:24.928 18:35:55 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:24.928 18:35:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:37:24.928 18:35:55 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:37:24.928 18:35:55 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:24.928 18:35:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # 
cmp_base_bdev='4128 32 true 0' 00:37:24.928 18:35:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:37:24.928 18:35:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:37:24.928 18:35:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:37:24.928 18:35:55 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:24.928 18:35:55 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:37:24.928 [2024-12-06 18:35:55.758850] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:37:24.928 18:35:55 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:24.928 18:35:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # '[' cc8a26ce-758c-484c-94d5-e0ea2095ba39 '!=' cc8a26ce-758c-484c-94d5-e0ea2095ba39 ']' 00:37:24.928 18:35:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:37:24.928 18:35:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@198 -- # case $1 in 00:37:24.928 18:35:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@199 -- # return 0 00:37:24.928 18:35:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:37:24.928 18:35:55 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:24.928 18:35:55 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:37:24.928 [2024-12-06 18:35:55.798569] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:37:24.928 18:35:55 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:37:24.928 18:35:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:37:24.928 18:35:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:37:24.928 18:35:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:37:24.928 18:35:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:37:24.928 18:35:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:37:24.928 18:35:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:37:24.928 18:35:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:37:24.928 18:35:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:37:24.928 18:35:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:37:24.928 18:35:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:37:24.928 18:35:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:24.928 18:35:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:24.928 18:35:55 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:24.928 18:35:55 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:37:24.928 18:35:55 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:24.928 18:35:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:37:24.928 "name": "raid_bdev1", 00:37:24.928 "uuid": "cc8a26ce-758c-484c-94d5-e0ea2095ba39", 00:37:24.928 "strip_size_kb": 0, 00:37:24.928 "state": "online", 00:37:24.928 "raid_level": "raid1", 00:37:24.928 "superblock": true, 00:37:24.928 "num_base_bdevs": 2, 00:37:24.928 "num_base_bdevs_discovered": 1, 00:37:24.928 "num_base_bdevs_operational": 1, 00:37:24.928 "base_bdevs_list": [ 00:37:24.928 { 00:37:24.928 "name": null, 00:37:24.928 "uuid": "00000000-0000-0000-0000-000000000000", 00:37:24.928 "is_configured": false, 00:37:24.928 "data_offset": 0, 00:37:24.928 "data_size": 7936 00:37:24.928 }, 00:37:24.928 { 00:37:24.928 "name": "pt2", 00:37:24.928 "uuid": "00000000-0000-0000-0000-000000000002", 00:37:24.928 "is_configured": true, 00:37:24.928 "data_offset": 256, 00:37:24.928 "data_size": 7936 00:37:24.928 } 00:37:24.928 ] 00:37:24.928 }' 00:37:24.928 18:35:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:37:24.928 18:35:55 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:37:25.497 18:35:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:37:25.497 18:35:56 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:25.497 18:35:56 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:37:25.497 [2024-12-06 18:35:56.209989] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:37:25.497 [2024-12-06 18:35:56.210019] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:37:25.497 [2024-12-06 18:35:56.210101] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:37:25.497 [2024-12-06 18:35:56.210177] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 
00:37:25.497 [2024-12-06 18:35:56.210194] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:37:25.497 18:35:56 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:25.497 18:35:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:25.497 18:35:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:37:25.497 18:35:56 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:25.497 18:35:56 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:37:25.497 18:35:56 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:25.497 18:35:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:37:25.497 18:35:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:37:25.497 18:35:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:37:25.497 18:35:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:37:25.497 18:35:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:37:25.497 18:35:56 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:25.497 18:35:56 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:37:25.497 18:35:56 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:25.497 18:35:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:37:25.497 18:35:56 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:37:25.497 18:35:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:37:25.497 18:35:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:37:25.497 18:35:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@519 -- # i=1 00:37:25.497 18:35:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:37:25.497 18:35:56 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:25.497 18:35:56 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:37:25.497 [2024-12-06 18:35:56.269916] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:37:25.497 [2024-12-06 18:35:56.269976] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:37:25.497 [2024-12-06 18:35:56.270011] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:37:25.497 [2024-12-06 18:35:56.270027] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:37:25.497 [2024-12-06 18:35:56.272671] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:37:25.497 [2024-12-06 18:35:56.272817] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:37:25.497 [2024-12-06 18:35:56.273005] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:37:25.497 [2024-12-06 18:35:56.273212] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:37:25.497 [2024-12-06 18:35:56.273334] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:37:25.497 [2024-12-06 18:35:56.273490] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: 
blockcnt 7936, blocklen 4128 00:37:25.497 [2024-12-06 18:35:56.273626] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:37:25.497 [2024-12-06 18:35:56.273789] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:37:25.497 [2024-12-06 18:35:56.273829] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:37:25.497 [2024-12-06 18:35:56.274022] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:37:25.497 pt2 00:37:25.497 18:35:56 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:25.497 18:35:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:37:25.497 18:35:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:37:25.497 18:35:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:37:25.497 18:35:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:37:25.497 18:35:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:37:25.497 18:35:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:37:25.497 18:35:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:37:25.497 18:35:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:37:25.497 18:35:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:37:25.497 18:35:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:37:25.497 18:35:56 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:25.497 18:35:56 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:25.497 18:35:56 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:37:25.497 18:35:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:25.497 18:35:56 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:25.497 18:35:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:37:25.497 "name": "raid_bdev1", 00:37:25.497 "uuid": "cc8a26ce-758c-484c-94d5-e0ea2095ba39", 00:37:25.497 "strip_size_kb": 0, 00:37:25.497 "state": "online", 00:37:25.497 "raid_level": "raid1", 00:37:25.497 "superblock": true, 00:37:25.497 "num_base_bdevs": 2, 00:37:25.497 "num_base_bdevs_discovered": 1, 00:37:25.497 "num_base_bdevs_operational": 1, 00:37:25.497 "base_bdevs_list": [ 00:37:25.498 { 00:37:25.498 "name": null, 00:37:25.498 "uuid": "00000000-0000-0000-0000-000000000000", 00:37:25.498 "is_configured": false, 00:37:25.498 "data_offset": 256, 00:37:25.498 "data_size": 7936 00:37:25.498 }, 00:37:25.498 { 00:37:25.498 "name": "pt2", 00:37:25.498 "uuid": "00000000-0000-0000-0000-000000000002", 00:37:25.498 "is_configured": true, 00:37:25.498 "data_offset": 256, 00:37:25.498 "data_size": 7936 00:37:25.498 } 00:37:25.498 ] 00:37:25.498 }' 00:37:25.498 18:35:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:37:25.498 18:35:56 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:37:25.757 18:35:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:37:25.757 18:35:56 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:25.757 18:35:56 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:37:25.757 [2024-12-06 18:35:56.669449] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:37:25.757 [2024-12-06 18:35:56.669477] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:37:25.757 [2024-12-06 18:35:56.669529] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:37:25.757 [2024-12-06 18:35:56.669578] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:37:25.757 [2024-12-06 18:35:56.669589] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:37:25.757 18:35:56 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:25.757 18:35:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:25.757 18:35:56 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:25.757 18:35:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:37:25.757 18:35:56 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:37:25.757 18:35:56 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:26.017 18:35:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:37:26.017 18:35:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:37:26.017 18:35:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:37:26.017 18:35:56 bdev_raid.raid_superblock_test_md_interleaved 
-- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:37:26.017 18:35:56 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:26.017 18:35:56 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:37:26.017 [2024-12-06 18:35:56.725412] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:37:26.017 [2024-12-06 18:35:56.725463] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:37:26.017 [2024-12-06 18:35:56.725484] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:37:26.017 [2024-12-06 18:35:56.725495] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:37:26.017 [2024-12-06 18:35:56.727915] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:37:26.017 [2024-12-06 18:35:56.727955] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:37:26.017 [2024-12-06 18:35:56.728007] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:37:26.017 [2024-12-06 18:35:56.728057] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:37:26.017 [2024-12-06 18:35:56.728166] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:37:26.017 [2024-12-06 18:35:56.728178] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:37:26.017 [2024-12-06 18:35:56.728196] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:37:26.017 [2024-12-06 18:35:56.728260] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:37:26.017 [2024-12-06 18:35:56.728334] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device 
register 0x617000008900 00:37:26.017 [2024-12-06 18:35:56.728344] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:37:26.017 [2024-12-06 18:35:56.728410] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:37:26.017 [2024-12-06 18:35:56.728467] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:37:26.017 [2024-12-06 18:35:56.728478] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:37:26.017 [2024-12-06 18:35:56.728543] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:37:26.017 pt1 00:37:26.017 18:35:56 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:26.017 18:35:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:37:26.017 18:35:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:37:26.017 18:35:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:37:26.017 18:35:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:37:26.017 18:35:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:37:26.017 18:35:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:37:26.017 18:35:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:37:26.017 18:35:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:37:26.017 18:35:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:37:26.017 18:35:56 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:37:26.017 18:35:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:37:26.017 18:35:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:26.017 18:35:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:26.017 18:35:56 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:26.017 18:35:56 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:37:26.017 18:35:56 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:26.017 18:35:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:37:26.017 "name": "raid_bdev1", 00:37:26.017 "uuid": "cc8a26ce-758c-484c-94d5-e0ea2095ba39", 00:37:26.017 "strip_size_kb": 0, 00:37:26.017 "state": "online", 00:37:26.017 "raid_level": "raid1", 00:37:26.017 "superblock": true, 00:37:26.017 "num_base_bdevs": 2, 00:37:26.017 "num_base_bdevs_discovered": 1, 00:37:26.017 "num_base_bdevs_operational": 1, 00:37:26.017 "base_bdevs_list": [ 00:37:26.017 { 00:37:26.017 "name": null, 00:37:26.017 "uuid": "00000000-0000-0000-0000-000000000000", 00:37:26.017 "is_configured": false, 00:37:26.017 "data_offset": 256, 00:37:26.017 "data_size": 7936 00:37:26.017 }, 00:37:26.017 { 00:37:26.017 "name": "pt2", 00:37:26.017 "uuid": "00000000-0000-0000-0000-000000000002", 00:37:26.017 "is_configured": true, 00:37:26.017 "data_offset": 256, 00:37:26.017 "data_size": 7936 00:37:26.017 } 00:37:26.017 ] 00:37:26.017 }' 00:37:26.017 18:35:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:37:26.017 18:35:56 bdev_raid.raid_superblock_test_md_interleaved -- 
common/autotest_common.sh@10 -- # set +x 00:37:26.276 18:35:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:37:26.276 18:35:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:37:26.276 18:35:57 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:26.276 18:35:57 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:37:26.276 18:35:57 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:26.276 18:35:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:37:26.276 18:35:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:37:26.277 18:35:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:37:26.277 18:35:57 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:26.277 18:35:57 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:37:26.277 [2024-12-06 18:35:57.161071] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:37:26.277 18:35:57 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:26.277 18:35:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # '[' cc8a26ce-758c-484c-94d5-e0ea2095ba39 '!=' cc8a26ce-758c-484c-94d5-e0ea2095ba39 ']' 00:37:26.277 18:35:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@563 -- # killprocess 88426 00:37:26.277 18:35:57 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@954 -- # '[' -z 88426 ']' 00:37:26.277 18:35:57 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@958 -- # kill -0 88426 00:37:26.277 18:35:57 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@959 -- # uname 00:37:26.277 18:35:57 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:37:26.277 18:35:57 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 88426 00:37:26.537 killing process with pid 88426 00:37:26.537 18:35:57 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:37:26.537 18:35:57 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:37:26.537 18:35:57 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@972 -- # echo 'killing process with pid 88426' 00:37:26.537 18:35:57 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@973 -- # kill 88426 00:37:26.537 [2024-12-06 18:35:57.235302] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:37:26.537 [2024-12-06 18:35:57.235395] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:37:26.537 18:35:57 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@978 -- # wait 88426 00:37:26.537 [2024-12-06 18:35:57.235449] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:37:26.537 [2024-12-06 18:35:57.235470] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:37:26.537 [2024-12-06 18:35:57.455125] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:37:27.918 ************************************ 00:37:27.918 END TEST raid_superblock_test_md_interleaved 00:37:27.918 ************************************ 00:37:27.918 18:35:58 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@565 -- # return 0 00:37:27.918 00:37:27.918 real 0m5.914s 00:37:27.918 user 0m8.703s 00:37:27.918 sys 0m1.287s 00:37:27.918 18:35:58 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:27.918 18:35:58 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:37:27.918 18:35:58 bdev_raid -- bdev/bdev_raid.sh@1013 -- # run_test raid_rebuild_test_sb_md_interleaved raid_rebuild_test raid1 2 true false false 00:37:27.918 18:35:58 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:37:27.918 18:35:58 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:37:27.918 18:35:58 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:37:27.918 ************************************ 00:37:27.918 START TEST raid_rebuild_test_sb_md_interleaved 00:37:27.918 ************************************ 00:37:27.918 18:35:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 true false false 00:37:27.918 18:35:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:37:27.918 18:35:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:37:27.918 18:35:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:37:27.918 18:35:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:37:27.918 18:35:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@573 -- # local verify=false 00:37:27.919 18:35:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:37:27.919 18:35:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:37:27.919 18:35:58 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:37:27.919 18:35:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:37:27.919 18:35:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:37:27.919 18:35:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:37:27.919 18:35:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:37:27.919 18:35:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:37:27.919 18:35:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:37:27.919 18:35:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:37:27.919 18:35:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:37:27.919 18:35:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@576 -- # local strip_size 00:37:27.919 18:35:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@577 -- # local create_arg 00:37:27.919 18:35:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:37:27.919 18:35:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@579 -- # local data_offset 00:37:27.919 18:35:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:37:27.919 18:35:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:37:27.919 18:35:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:37:27.919 18:35:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:37:27.919 
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:27.919 18:35:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@597 -- # raid_pid=88749 00:37:27.919 18:35:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@598 -- # waitforlisten 88749 00:37:27.919 18:35:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:37:27.919 18:35:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@835 -- # '[' -z 88749 ']' 00:37:27.919 18:35:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:27.919 18:35:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@840 -- # local max_retries=100 00:37:27.919 18:35:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:27.919 18:35:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@844 -- # xtrace_disable 00:37:27.919 18:35:58 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:37:27.919 [2024-12-06 18:35:58.863302] Starting SPDK v25.01-pre git sha1 b6a18b192 / DPDK 24.03.0 initialization... 00:37:27.919 [2024-12-06 18:35:58.863631] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.ealI/O size of 3145728 is greater than zero copy threshold (65536). 00:37:27.919 Zero copy mechanism will not be used. 
00:37:27.919 :6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88749 ] 00:37:28.178 [2024-12-06 18:35:59.050851] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:28.439 [2024-12-06 18:35:59.185915] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:37:28.698 [2024-12-06 18:35:59.430385] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:37:28.698 [2024-12-06 18:35:59.430597] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:37:28.959 18:35:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:37:28.959 18:35:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@868 -- # return 0 00:37:28.959 18:35:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:37:28.959 18:35:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev1_malloc 00:37:28.959 18:35:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:28.959 18:35:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:37:28.959 BaseBdev1_malloc 00:37:28.959 18:35:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:28.959 18:35:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:37:28.959 18:35:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:28.959 18:35:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:37:28.959 [2024-12-06 18:35:59.733600] vbdev_passthru.c: 
608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:37:28.959 [2024-12-06 18:35:59.733671] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:37:28.959 [2024-12-06 18:35:59.733716] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:37:28.959 [2024-12-06 18:35:59.733733] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:37:28.959 [2024-12-06 18:35:59.736233] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:37:28.959 [2024-12-06 18:35:59.736448] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:37:28.959 BaseBdev1 00:37:28.959 18:35:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:28.959 18:35:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:37:28.959 18:35:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev2_malloc 00:37:28.959 18:35:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:28.959 18:35:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:37:28.959 BaseBdev2_malloc 00:37:28.959 18:35:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:28.959 18:35:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:37:28.959 18:35:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:28.959 18:35:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:37:28.959 [2024-12-06 18:35:59.796284] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on 
BaseBdev2_malloc 00:37:28.959 [2024-12-06 18:35:59.796348] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:37:28.959 [2024-12-06 18:35:59.796371] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:37:28.959 [2024-12-06 18:35:59.796388] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:37:28.959 [2024-12-06 18:35:59.798788] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:37:28.959 [2024-12-06 18:35:59.798947] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:37:28.959 BaseBdev2 00:37:28.959 18:35:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:28.959 18:35:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b spare_malloc 00:37:28.959 18:35:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:28.959 18:35:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:37:28.959 spare_malloc 00:37:28.959 18:35:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:28.959 18:35:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:37:28.959 18:35:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:28.959 18:35:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:37:28.959 spare_delay 00:37:28.959 18:35:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:28.959 18:35:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create 
-b spare_delay -p spare 00:37:28.959 18:35:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:28.959 18:35:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:37:28.959 [2024-12-06 18:35:59.896048] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:37:28.959 [2024-12-06 18:35:59.896112] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:37:28.959 [2024-12-06 18:35:59.896135] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:37:28.959 [2024-12-06 18:35:59.896165] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:37:28.959 [2024-12-06 18:35:59.898562] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:37:28.959 [2024-12-06 18:35:59.898736] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:37:28.959 spare 00:37:28.959 18:35:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:28.960 18:35:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:37:28.960 18:35:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:28.960 18:35:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:37:29.219 [2024-12-06 18:35:59.908088] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:37:29.219 [2024-12-06 18:35:59.910492] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:37:29.219 [2024-12-06 18:35:59.910715] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:37:29.219 [2024-12-06 18:35:59.910733] 
bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:37:29.219 [2024-12-06 18:35:59.910809] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:37:29.219 [2024-12-06 18:35:59.910883] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:37:29.219 [2024-12-06 18:35:59.910893] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:37:29.219 [2024-12-06 18:35:59.910964] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:37:29.219 18:35:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:29.219 18:35:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:37:29.219 18:35:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:37:29.219 18:35:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:37:29.219 18:35:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:37:29.219 18:35:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:37:29.219 18:35:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:37:29.219 18:35:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:37:29.219 18:35:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:37:29.219 18:35:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:37:29.219 18:35:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:37:29.219 18:35:59 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:29.219 18:35:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:29.219 18:35:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:29.219 18:35:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:37:29.219 18:35:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:29.219 18:35:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:37:29.219 "name": "raid_bdev1", 00:37:29.219 "uuid": "79e29238-0dca-4898-8fac-6f46d6e4cc85", 00:37:29.219 "strip_size_kb": 0, 00:37:29.219 "state": "online", 00:37:29.219 "raid_level": "raid1", 00:37:29.219 "superblock": true, 00:37:29.219 "num_base_bdevs": 2, 00:37:29.219 "num_base_bdevs_discovered": 2, 00:37:29.219 "num_base_bdevs_operational": 2, 00:37:29.219 "base_bdevs_list": [ 00:37:29.219 { 00:37:29.219 "name": "BaseBdev1", 00:37:29.219 "uuid": "403e87f3-4234-57a0-be11-3f5d207f342f", 00:37:29.219 "is_configured": true, 00:37:29.219 "data_offset": 256, 00:37:29.219 "data_size": 7936 00:37:29.219 }, 00:37:29.219 { 00:37:29.219 "name": "BaseBdev2", 00:37:29.219 "uuid": "09416c92-151d-5081-ba5a-8935430db804", 00:37:29.219 "is_configured": true, 00:37:29.219 "data_offset": 256, 00:37:29.219 "data_size": 7936 00:37:29.219 } 00:37:29.219 ] 00:37:29.219 }' 00:37:29.219 18:35:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:37:29.219 18:35:59 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:37:29.479 18:36:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:37:29.479 18:36:00 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:29.479 18:36:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:37:29.479 18:36:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:37:29.479 [2024-12-06 18:36:00.327849] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:37:29.479 18:36:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:29.479 18:36:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=7936 00:37:29.479 18:36:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:37:29.479 18:36:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:29.479 18:36:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:29.479 18:36:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:37:29.479 18:36:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:29.479 18:36:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@619 -- # data_offset=256 00:37:29.479 18:36:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:37:29.479 18:36:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@624 -- # '[' false = true ']' 00:37:29.479 18:36:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:37:29.479 18:36:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:29.479 18:36:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
common/autotest_common.sh@10 -- # set +x 00:37:29.479 [2024-12-06 18:36:00.403350] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:37:29.479 18:36:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:29.479 18:36:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:37:29.479 18:36:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:37:29.479 18:36:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:37:29.479 18:36:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:37:29.479 18:36:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:37:29.479 18:36:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:37:29.479 18:36:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:37:29.479 18:36:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:37:29.479 18:36:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:37:29.479 18:36:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:37:29.479 18:36:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:29.479 18:36:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:29.479 18:36:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:29.479 18:36:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
common/autotest_common.sh@10 -- # set +x 00:37:29.739 18:36:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:29.739 18:36:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:37:29.739 "name": "raid_bdev1", 00:37:29.739 "uuid": "79e29238-0dca-4898-8fac-6f46d6e4cc85", 00:37:29.739 "strip_size_kb": 0, 00:37:29.739 "state": "online", 00:37:29.739 "raid_level": "raid1", 00:37:29.739 "superblock": true, 00:37:29.739 "num_base_bdevs": 2, 00:37:29.739 "num_base_bdevs_discovered": 1, 00:37:29.739 "num_base_bdevs_operational": 1, 00:37:29.739 "base_bdevs_list": [ 00:37:29.739 { 00:37:29.739 "name": null, 00:37:29.739 "uuid": "00000000-0000-0000-0000-000000000000", 00:37:29.739 "is_configured": false, 00:37:29.739 "data_offset": 0, 00:37:29.739 "data_size": 7936 00:37:29.739 }, 00:37:29.739 { 00:37:29.739 "name": "BaseBdev2", 00:37:29.739 "uuid": "09416c92-151d-5081-ba5a-8935430db804", 00:37:29.739 "is_configured": true, 00:37:29.739 "data_offset": 256, 00:37:29.739 "data_size": 7936 00:37:29.739 } 00:37:29.739 ] 00:37:29.739 }' 00:37:29.739 18:36:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:37:29.739 18:36:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:37:29.998 18:36:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:37:29.998 18:36:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:29.998 18:36:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:37:29.998 [2024-12-06 18:36:00.799004] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:37:29.998 [2024-12-06 18:36:00.819910] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 
00:37:29.998 18:36:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:29.998 18:36:00 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@647 -- # sleep 1 00:37:29.998 [2024-12-06 18:36:00.822681] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:37:30.937 18:36:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:37:30.937 18:36:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:37:30.937 18:36:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:37:30.937 18:36:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:37:30.937 18:36:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:37:30.937 18:36:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:30.937 18:36:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:30.937 18:36:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:37:30.937 18:36:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:30.937 18:36:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:30.937 18:36:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:37:30.937 "name": "raid_bdev1", 00:37:30.937 "uuid": "79e29238-0dca-4898-8fac-6f46d6e4cc85", 00:37:30.937 "strip_size_kb": 0, 00:37:30.937 "state": "online", 00:37:30.937 "raid_level": "raid1", 00:37:30.937 "superblock": true, 00:37:30.937 
"num_base_bdevs": 2, 00:37:30.937 "num_base_bdevs_discovered": 2, 00:37:30.937 "num_base_bdevs_operational": 2, 00:37:30.937 "process": { 00:37:30.937 "type": "rebuild", 00:37:30.937 "target": "spare", 00:37:30.937 "progress": { 00:37:30.937 "blocks": 2560, 00:37:30.937 "percent": 32 00:37:30.937 } 00:37:30.937 }, 00:37:30.937 "base_bdevs_list": [ 00:37:30.938 { 00:37:30.938 "name": "spare", 00:37:30.938 "uuid": "057207ee-39c8-5705-8d35-65730a5cb946", 00:37:30.938 "is_configured": true, 00:37:30.938 "data_offset": 256, 00:37:30.938 "data_size": 7936 00:37:30.938 }, 00:37:30.938 { 00:37:30.938 "name": "BaseBdev2", 00:37:30.938 "uuid": "09416c92-151d-5081-ba5a-8935430db804", 00:37:30.938 "is_configured": true, 00:37:30.938 "data_offset": 256, 00:37:30.938 "data_size": 7936 00:37:30.938 } 00:37:30.938 ] 00:37:30.938 }' 00:37:30.938 18:36:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:37:31.197 18:36:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:37:31.197 18:36:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:37:31.197 18:36:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:37:31.197 18:36:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:37:31.197 18:36:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:31.197 18:36:01 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:37:31.197 [2024-12-06 18:36:01.975238] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:37:31.197 [2024-12-06 18:36:02.031914] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:37:31.197 
[2024-12-06 18:36:02.032000] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:37:31.197 [2024-12-06 18:36:02.032018] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:37:31.197 [2024-12-06 18:36:02.032036] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:37:31.197 18:36:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:31.197 18:36:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:37:31.197 18:36:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:37:31.197 18:36:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:37:31.197 18:36:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:37:31.197 18:36:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:37:31.197 18:36:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:37:31.197 18:36:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:37:31.197 18:36:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:37:31.197 18:36:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:37:31.197 18:36:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:37:31.197 18:36:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:31.197 18:36:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:37:31.197 18:36:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:31.198 18:36:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:37:31.198 18:36:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:31.198 18:36:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:37:31.198 "name": "raid_bdev1", 00:37:31.198 "uuid": "79e29238-0dca-4898-8fac-6f46d6e4cc85", 00:37:31.198 "strip_size_kb": 0, 00:37:31.198 "state": "online", 00:37:31.198 "raid_level": "raid1", 00:37:31.198 "superblock": true, 00:37:31.198 "num_base_bdevs": 2, 00:37:31.198 "num_base_bdevs_discovered": 1, 00:37:31.198 "num_base_bdevs_operational": 1, 00:37:31.198 "base_bdevs_list": [ 00:37:31.198 { 00:37:31.198 "name": null, 00:37:31.198 "uuid": "00000000-0000-0000-0000-000000000000", 00:37:31.198 "is_configured": false, 00:37:31.198 "data_offset": 0, 00:37:31.198 "data_size": 7936 00:37:31.198 }, 00:37:31.198 { 00:37:31.198 "name": "BaseBdev2", 00:37:31.198 "uuid": "09416c92-151d-5081-ba5a-8935430db804", 00:37:31.198 "is_configured": true, 00:37:31.198 "data_offset": 256, 00:37:31.198 "data_size": 7936 00:37:31.198 } 00:37:31.198 ] 00:37:31.198 }' 00:37:31.198 18:36:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:37:31.198 18:36:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:37:31.767 18:36:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:37:31.767 18:36:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:37:31.767 18:36:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:37:31.767 18:36:02 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:37:31.767 18:36:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:37:31.767 18:36:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:31.767 18:36:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:31.767 18:36:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:31.767 18:36:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:37:31.767 18:36:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:31.767 18:36:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:37:31.767 "name": "raid_bdev1", 00:37:31.767 "uuid": "79e29238-0dca-4898-8fac-6f46d6e4cc85", 00:37:31.767 "strip_size_kb": 0, 00:37:31.767 "state": "online", 00:37:31.767 "raid_level": "raid1", 00:37:31.767 "superblock": true, 00:37:31.767 "num_base_bdevs": 2, 00:37:31.767 "num_base_bdevs_discovered": 1, 00:37:31.767 "num_base_bdevs_operational": 1, 00:37:31.767 "base_bdevs_list": [ 00:37:31.767 { 00:37:31.767 "name": null, 00:37:31.767 "uuid": "00000000-0000-0000-0000-000000000000", 00:37:31.767 "is_configured": false, 00:37:31.767 "data_offset": 0, 00:37:31.767 "data_size": 7936 00:37:31.767 }, 00:37:31.767 { 00:37:31.767 "name": "BaseBdev2", 00:37:31.767 "uuid": "09416c92-151d-5081-ba5a-8935430db804", 00:37:31.767 "is_configured": true, 00:37:31.767 "data_offset": 256, 00:37:31.767 "data_size": 7936 00:37:31.767 } 00:37:31.767 ] 00:37:31.767 }' 00:37:31.767 18:36:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:37:31.767 18:36:02 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:37:31.767 18:36:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:37:31.767 18:36:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:37:31.767 18:36:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:37:31.767 18:36:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:31.767 18:36:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:37:31.767 [2024-12-06 18:36:02.589491] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:37:31.767 [2024-12-06 18:36:02.607147] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:37:31.767 18:36:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:31.767 18:36:02 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@663 -- # sleep 1 00:37:31.767 [2024-12-06 18:36:02.609555] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:37:32.706 18:36:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:37:32.706 18:36:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:37:32.706 18:36:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:37:32.706 18:36:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:37:32.706 18:36:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:37:32.706 
18:36:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:32.706 18:36:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:32.706 18:36:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:32.706 18:36:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:37:32.706 18:36:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:32.966 18:36:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:37:32.966 "name": "raid_bdev1", 00:37:32.966 "uuid": "79e29238-0dca-4898-8fac-6f46d6e4cc85", 00:37:32.966 "strip_size_kb": 0, 00:37:32.966 "state": "online", 00:37:32.966 "raid_level": "raid1", 00:37:32.966 "superblock": true, 00:37:32.966 "num_base_bdevs": 2, 00:37:32.966 "num_base_bdevs_discovered": 2, 00:37:32.966 "num_base_bdevs_operational": 2, 00:37:32.966 "process": { 00:37:32.966 "type": "rebuild", 00:37:32.966 "target": "spare", 00:37:32.966 "progress": { 00:37:32.966 "blocks": 2560, 00:37:32.966 "percent": 32 00:37:32.966 } 00:37:32.966 }, 00:37:32.966 "base_bdevs_list": [ 00:37:32.966 { 00:37:32.966 "name": "spare", 00:37:32.966 "uuid": "057207ee-39c8-5705-8d35-65730a5cb946", 00:37:32.966 "is_configured": true, 00:37:32.966 "data_offset": 256, 00:37:32.966 "data_size": 7936 00:37:32.966 }, 00:37:32.966 { 00:37:32.966 "name": "BaseBdev2", 00:37:32.966 "uuid": "09416c92-151d-5081-ba5a-8935430db804", 00:37:32.966 "is_configured": true, 00:37:32.966 "data_offset": 256, 00:37:32.966 "data_size": 7936 00:37:32.966 } 00:37:32.966 ] 00:37:32.966 }' 00:37:32.966 18:36:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:37:32.966 18:36:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:37:32.966 18:36:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:37:32.966 18:36:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:37:32.966 18:36:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:37:32.966 18:36:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:37:32.966 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:37:32.966 18:36:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:37:32.966 18:36:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:37:32.966 18:36:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:37:32.966 18:36:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@706 -- # local timeout=740 00:37:32.966 18:36:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:37:32.966 18:36:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:37:32.966 18:36:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:37:32.966 18:36:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:37:32.966 18:36:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:37:32.966 18:36:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:37:32.966 18:36:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # 
rpc_cmd bdev_raid_get_bdevs all 00:37:32.966 18:36:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:32.966 18:36:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:32.966 18:36:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:37:32.966 18:36:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:32.966 18:36:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:37:32.966 "name": "raid_bdev1", 00:37:32.966 "uuid": "79e29238-0dca-4898-8fac-6f46d6e4cc85", 00:37:32.966 "strip_size_kb": 0, 00:37:32.966 "state": "online", 00:37:32.967 "raid_level": "raid1", 00:37:32.967 "superblock": true, 00:37:32.967 "num_base_bdevs": 2, 00:37:32.967 "num_base_bdevs_discovered": 2, 00:37:32.967 "num_base_bdevs_operational": 2, 00:37:32.967 "process": { 00:37:32.967 "type": "rebuild", 00:37:32.967 "target": "spare", 00:37:32.967 "progress": { 00:37:32.967 "blocks": 2816, 00:37:32.967 "percent": 35 00:37:32.967 } 00:37:32.967 }, 00:37:32.967 "base_bdevs_list": [ 00:37:32.967 { 00:37:32.967 "name": "spare", 00:37:32.967 "uuid": "057207ee-39c8-5705-8d35-65730a5cb946", 00:37:32.967 "is_configured": true, 00:37:32.967 "data_offset": 256, 00:37:32.967 "data_size": 7936 00:37:32.967 }, 00:37:32.967 { 00:37:32.967 "name": "BaseBdev2", 00:37:32.967 "uuid": "09416c92-151d-5081-ba5a-8935430db804", 00:37:32.967 "is_configured": true, 00:37:32.967 "data_offset": 256, 00:37:32.967 "data_size": 7936 00:37:32.967 } 00:37:32.967 ] 00:37:32.967 }' 00:37:32.967 18:36:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:37:32.967 18:36:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:37:32.967 18:36:03 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:37:32.967 18:36:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:37:32.967 18:36:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@711 -- # sleep 1 00:37:34.346 18:36:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:37:34.346 18:36:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:37:34.346 18:36:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:37:34.346 18:36:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:37:34.346 18:36:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:37:34.346 18:36:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:37:34.346 18:36:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:34.346 18:36:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:34.346 18:36:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:34.346 18:36:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:37:34.346 18:36:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:34.346 18:36:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:37:34.346 "name": "raid_bdev1", 00:37:34.346 "uuid": "79e29238-0dca-4898-8fac-6f46d6e4cc85", 00:37:34.346 "strip_size_kb": 0, 00:37:34.346 "state": 
"online", 00:37:34.346 "raid_level": "raid1", 00:37:34.346 "superblock": true, 00:37:34.346 "num_base_bdevs": 2, 00:37:34.346 "num_base_bdevs_discovered": 2, 00:37:34.346 "num_base_bdevs_operational": 2, 00:37:34.346 "process": { 00:37:34.346 "type": "rebuild", 00:37:34.346 "target": "spare", 00:37:34.346 "progress": { 00:37:34.346 "blocks": 5632, 00:37:34.346 "percent": 70 00:37:34.346 } 00:37:34.346 }, 00:37:34.346 "base_bdevs_list": [ 00:37:34.346 { 00:37:34.346 "name": "spare", 00:37:34.346 "uuid": "057207ee-39c8-5705-8d35-65730a5cb946", 00:37:34.346 "is_configured": true, 00:37:34.346 "data_offset": 256, 00:37:34.346 "data_size": 7936 00:37:34.346 }, 00:37:34.346 { 00:37:34.346 "name": "BaseBdev2", 00:37:34.346 "uuid": "09416c92-151d-5081-ba5a-8935430db804", 00:37:34.346 "is_configured": true, 00:37:34.346 "data_offset": 256, 00:37:34.346 "data_size": 7936 00:37:34.346 } 00:37:34.346 ] 00:37:34.346 }' 00:37:34.346 18:36:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:37:34.346 18:36:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:37:34.346 18:36:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:37:34.346 18:36:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:37:34.346 18:36:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@711 -- # sleep 1 00:37:34.913 [2024-12-06 18:36:05.732204] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:37:34.913 [2024-12-06 18:36:05.732465] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:37:34.913 [2024-12-06 18:36:05.732608] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:37:35.171 18:36:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:37:35.171 18:36:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:37:35.171 18:36:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:37:35.171 18:36:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:37:35.171 18:36:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:37:35.171 18:36:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:37:35.171 18:36:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:35.171 18:36:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:35.171 18:36:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:35.171 18:36:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:37:35.171 18:36:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:35.171 18:36:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:37:35.171 "name": "raid_bdev1", 00:37:35.171 "uuid": "79e29238-0dca-4898-8fac-6f46d6e4cc85", 00:37:35.171 "strip_size_kb": 0, 00:37:35.171 "state": "online", 00:37:35.171 "raid_level": "raid1", 00:37:35.171 "superblock": true, 00:37:35.171 "num_base_bdevs": 2, 00:37:35.171 "num_base_bdevs_discovered": 2, 00:37:35.171 "num_base_bdevs_operational": 2, 00:37:35.171 "base_bdevs_list": [ 00:37:35.171 { 00:37:35.171 "name": "spare", 00:37:35.172 "uuid": "057207ee-39c8-5705-8d35-65730a5cb946", 00:37:35.172 "is_configured": true, 00:37:35.172 "data_offset": 256, 
00:37:35.172 "data_size": 7936 00:37:35.172 }, 00:37:35.172 { 00:37:35.172 "name": "BaseBdev2", 00:37:35.172 "uuid": "09416c92-151d-5081-ba5a-8935430db804", 00:37:35.172 "is_configured": true, 00:37:35.172 "data_offset": 256, 00:37:35.172 "data_size": 7936 00:37:35.172 } 00:37:35.172 ] 00:37:35.172 }' 00:37:35.172 18:36:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:37:35.431 18:36:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:37:35.431 18:36:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:37:35.431 18:36:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:37:35.431 18:36:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@709 -- # break 00:37:35.431 18:36:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:37:35.431 18:36:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:37:35.431 18:36:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:37:35.431 18:36:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:37:35.431 18:36:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:37:35.431 18:36:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:35.431 18:36:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:35.431 18:36:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:37:35.431 18:36:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:35.431 18:36:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:35.431 18:36:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:37:35.431 "name": "raid_bdev1", 00:37:35.431 "uuid": "79e29238-0dca-4898-8fac-6f46d6e4cc85", 00:37:35.431 "strip_size_kb": 0, 00:37:35.431 "state": "online", 00:37:35.431 "raid_level": "raid1", 00:37:35.431 "superblock": true, 00:37:35.431 "num_base_bdevs": 2, 00:37:35.431 "num_base_bdevs_discovered": 2, 00:37:35.431 "num_base_bdevs_operational": 2, 00:37:35.431 "base_bdevs_list": [ 00:37:35.431 { 00:37:35.431 "name": "spare", 00:37:35.431 "uuid": "057207ee-39c8-5705-8d35-65730a5cb946", 00:37:35.431 "is_configured": true, 00:37:35.431 "data_offset": 256, 00:37:35.431 "data_size": 7936 00:37:35.431 }, 00:37:35.431 { 00:37:35.431 "name": "BaseBdev2", 00:37:35.431 "uuid": "09416c92-151d-5081-ba5a-8935430db804", 00:37:35.431 "is_configured": true, 00:37:35.431 "data_offset": 256, 00:37:35.431 "data_size": 7936 00:37:35.431 } 00:37:35.431 ] 00:37:35.431 }' 00:37:35.431 18:36:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:37:35.431 18:36:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:37:35.431 18:36:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:37:35.431 18:36:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:37:35.431 18:36:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:37:35.431 18:36:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:37:35.431 18:36:06 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:37:35.431 18:36:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:37:35.431 18:36:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:37:35.431 18:36:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:37:35.431 18:36:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:37:35.431 18:36:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:37:35.431 18:36:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:37:35.431 18:36:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:37:35.431 18:36:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:35.431 18:36:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:35.431 18:36:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:35.431 18:36:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:37:35.431 18:36:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:35.431 18:36:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:37:35.431 "name": "raid_bdev1", 00:37:35.431 "uuid": "79e29238-0dca-4898-8fac-6f46d6e4cc85", 00:37:35.431 "strip_size_kb": 0, 00:37:35.431 "state": "online", 00:37:35.431 "raid_level": "raid1", 00:37:35.431 "superblock": true, 00:37:35.431 "num_base_bdevs": 2, 00:37:35.431 "num_base_bdevs_discovered": 2, 
00:37:35.431 "num_base_bdevs_operational": 2, 00:37:35.431 "base_bdevs_list": [ 00:37:35.431 { 00:37:35.431 "name": "spare", 00:37:35.431 "uuid": "057207ee-39c8-5705-8d35-65730a5cb946", 00:37:35.431 "is_configured": true, 00:37:35.431 "data_offset": 256, 00:37:35.431 "data_size": 7936 00:37:35.431 }, 00:37:35.431 { 00:37:35.431 "name": "BaseBdev2", 00:37:35.431 "uuid": "09416c92-151d-5081-ba5a-8935430db804", 00:37:35.431 "is_configured": true, 00:37:35.431 "data_offset": 256, 00:37:35.431 "data_size": 7936 00:37:35.431 } 00:37:35.431 ] 00:37:35.431 }' 00:37:35.431 18:36:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:37:35.431 18:36:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:37:36.001 18:36:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:37:36.001 18:36:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:36.001 18:36:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:37:36.001 [2024-12-06 18:36:06.716838] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:37:36.001 [2024-12-06 18:36:06.717004] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:37:36.001 [2024-12-06 18:36:06.717128] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:37:36.001 [2024-12-06 18:36:06.717225] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:37:36.001 [2024-12-06 18:36:06.717238] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:37:36.001 18:36:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:36.001 18:36:06 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # jq length 00:37:36.001 18:36:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:36.001 18:36:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:36.001 18:36:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:37:36.001 18:36:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:36.001 18:36:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:37:36.001 18:36:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@722 -- # '[' false = true ']' 00:37:36.001 18:36:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:37:36.001 18:36:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:37:36.001 18:36:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:36.001 18:36:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:37:36.001 18:36:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:36.001 18:36:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:37:36.001 18:36:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:36.001 18:36:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:37:36.001 [2024-12-06 18:36:06.780731] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:37:36.001 [2024-12-06 18:36:06.780796] vbdev_passthru.c: 636:vbdev_passthru_register: 
*NOTICE*: base bdev opened 00:37:36.001 [2024-12-06 18:36:06.780840] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:37:36.001 [2024-12-06 18:36:06.780852] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:37:36.001 [2024-12-06 18:36:06.783480] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:37:36.001 [2024-12-06 18:36:06.783521] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:37:36.001 [2024-12-06 18:36:06.783585] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:37:36.001 [2024-12-06 18:36:06.783638] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:37:36.001 [2024-12-06 18:36:06.783753] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:37:36.001 spare 00:37:36.001 18:36:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:36.001 18:36:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:37:36.001 18:36:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:36.001 18:36:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:37:36.001 [2024-12-06 18:36:06.883681] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:37:36.001 [2024-12-06 18:36:06.883837] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:37:36.001 [2024-12-06 18:36:06.883948] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:37:36.001 [2024-12-06 18:36:06.884035] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:37:36.001 [2024-12-06 18:36:06.884048] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid 
bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:37:36.001 [2024-12-06 18:36:06.884136] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:37:36.001 18:36:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:36.001 18:36:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:37:36.001 18:36:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:37:36.001 18:36:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:37:36.001 18:36:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:37:36.001 18:36:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:37:36.001 18:36:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:37:36.001 18:36:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:37:36.001 18:36:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:37:36.001 18:36:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:37:36.001 18:36:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:37:36.001 18:36:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:36.001 18:36:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:36.001 18:36:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:36.001 18:36:06 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:37:36.001 18:36:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:36.001 18:36:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:37:36.001 "name": "raid_bdev1", 00:37:36.001 "uuid": "79e29238-0dca-4898-8fac-6f46d6e4cc85", 00:37:36.001 "strip_size_kb": 0, 00:37:36.001 "state": "online", 00:37:36.001 "raid_level": "raid1", 00:37:36.001 "superblock": true, 00:37:36.001 "num_base_bdevs": 2, 00:37:36.001 "num_base_bdevs_discovered": 2, 00:37:36.001 "num_base_bdevs_operational": 2, 00:37:36.001 "base_bdevs_list": [ 00:37:36.001 { 00:37:36.001 "name": "spare", 00:37:36.001 "uuid": "057207ee-39c8-5705-8d35-65730a5cb946", 00:37:36.001 "is_configured": true, 00:37:36.001 "data_offset": 256, 00:37:36.001 "data_size": 7936 00:37:36.001 }, 00:37:36.001 { 00:37:36.001 "name": "BaseBdev2", 00:37:36.001 "uuid": "09416c92-151d-5081-ba5a-8935430db804", 00:37:36.001 "is_configured": true, 00:37:36.001 "data_offset": 256, 00:37:36.001 "data_size": 7936 00:37:36.001 } 00:37:36.001 ] 00:37:36.001 }' 00:37:36.001 18:36:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:37:36.001 18:36:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:37:36.572 18:36:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:37:36.572 18:36:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:37:36.572 18:36:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:37:36.572 18:36:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:37:36.572 18:36:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:37:36.572 18:36:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:36.572 18:36:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:36.572 18:36:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:36.572 18:36:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:37:36.572 18:36:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:36.572 18:36:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:37:36.572 "name": "raid_bdev1", 00:37:36.572 "uuid": "79e29238-0dca-4898-8fac-6f46d6e4cc85", 00:37:36.572 "strip_size_kb": 0, 00:37:36.572 "state": "online", 00:37:36.572 "raid_level": "raid1", 00:37:36.572 "superblock": true, 00:37:36.572 "num_base_bdevs": 2, 00:37:36.572 "num_base_bdevs_discovered": 2, 00:37:36.572 "num_base_bdevs_operational": 2, 00:37:36.572 "base_bdevs_list": [ 00:37:36.572 { 00:37:36.572 "name": "spare", 00:37:36.572 "uuid": "057207ee-39c8-5705-8d35-65730a5cb946", 00:37:36.573 "is_configured": true, 00:37:36.573 "data_offset": 256, 00:37:36.573 "data_size": 7936 00:37:36.573 }, 00:37:36.573 { 00:37:36.573 "name": "BaseBdev2", 00:37:36.573 "uuid": "09416c92-151d-5081-ba5a-8935430db804", 00:37:36.573 "is_configured": true, 00:37:36.573 "data_offset": 256, 00:37:36.573 "data_size": 7936 00:37:36.573 } 00:37:36.573 ] 00:37:36.573 }' 00:37:36.573 18:36:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:37:36.573 18:36:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:37:36.573 18:36:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 
-- # jq -r '.process.target // "none"' 00:37:36.573 18:36:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:37:36.573 18:36:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:36.573 18:36:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:37:36.573 18:36:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:36.573 18:36:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:37:36.573 18:36:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:36.573 18:36:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:37:36.573 18:36:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:37:36.573 18:36:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:36.573 18:36:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:37:36.573 [2024-12-06 18:36:07.483878] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:37:36.573 18:36:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:36.573 18:36:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:37:36.573 18:36:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:37:36.573 18:36:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:37:36.573 18:36:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # 
local raid_level=raid1 00:37:36.573 18:36:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:37:36.573 18:36:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:37:36.573 18:36:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:37:36.573 18:36:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:37:36.573 18:36:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:37:36.573 18:36:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:37:36.573 18:36:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:36.573 18:36:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:36.573 18:36:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:36.573 18:36:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:37:36.573 18:36:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:36.870 18:36:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:37:36.870 "name": "raid_bdev1", 00:37:36.870 "uuid": "79e29238-0dca-4898-8fac-6f46d6e4cc85", 00:37:36.870 "strip_size_kb": 0, 00:37:36.870 "state": "online", 00:37:36.870 "raid_level": "raid1", 00:37:36.870 "superblock": true, 00:37:36.870 "num_base_bdevs": 2, 00:37:36.870 "num_base_bdevs_discovered": 1, 00:37:36.870 "num_base_bdevs_operational": 1, 00:37:36.870 "base_bdevs_list": [ 00:37:36.870 { 00:37:36.870 "name": null, 00:37:36.870 "uuid": "00000000-0000-0000-0000-000000000000", 00:37:36.870 
"is_configured": false, 00:37:36.870 "data_offset": 0, 00:37:36.870 "data_size": 7936 00:37:36.870 }, 00:37:36.870 { 00:37:36.870 "name": "BaseBdev2", 00:37:36.870 "uuid": "09416c92-151d-5081-ba5a-8935430db804", 00:37:36.870 "is_configured": true, 00:37:36.870 "data_offset": 256, 00:37:36.870 "data_size": 7936 00:37:36.870 } 00:37:36.870 ] 00:37:36.870 }' 00:37:36.870 18:36:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:37:36.870 18:36:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:37:37.167 18:36:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:37:37.167 18:36:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:37.167 18:36:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:37:37.167 [2024-12-06 18:36:07.923279] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:37:37.167 [2024-12-06 18:36:07.923500] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:37:37.167 [2024-12-06 18:36:07.923522] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:37:37.167 [2024-12-06 18:36:07.923569] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:37:37.167 [2024-12-06 18:36:07.941293] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:37:37.168 18:36:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:37.168 18:36:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@757 -- # sleep 1 00:37:37.168 [2024-12-06 18:36:07.943763] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:37:38.104 18:36:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:37:38.104 18:36:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:37:38.104 18:36:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:37:38.104 18:36:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:37:38.104 18:36:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:37:38.104 18:36:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:38.104 18:36:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:38.104 18:36:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:38.104 18:36:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:37:38.104 18:36:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:38.104 18:36:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 
00:37:38.104 "name": "raid_bdev1", 00:37:38.104 "uuid": "79e29238-0dca-4898-8fac-6f46d6e4cc85", 00:37:38.104 "strip_size_kb": 0, 00:37:38.104 "state": "online", 00:37:38.104 "raid_level": "raid1", 00:37:38.104 "superblock": true, 00:37:38.104 "num_base_bdevs": 2, 00:37:38.104 "num_base_bdevs_discovered": 2, 00:37:38.104 "num_base_bdevs_operational": 2, 00:37:38.104 "process": { 00:37:38.104 "type": "rebuild", 00:37:38.104 "target": "spare", 00:37:38.104 "progress": { 00:37:38.104 "blocks": 2560, 00:37:38.104 "percent": 32 00:37:38.104 } 00:37:38.104 }, 00:37:38.104 "base_bdevs_list": [ 00:37:38.104 { 00:37:38.104 "name": "spare", 00:37:38.104 "uuid": "057207ee-39c8-5705-8d35-65730a5cb946", 00:37:38.104 "is_configured": true, 00:37:38.104 "data_offset": 256, 00:37:38.104 "data_size": 7936 00:37:38.104 }, 00:37:38.104 { 00:37:38.104 "name": "BaseBdev2", 00:37:38.104 "uuid": "09416c92-151d-5081-ba5a-8935430db804", 00:37:38.104 "is_configured": true, 00:37:38.104 "data_offset": 256, 00:37:38.104 "data_size": 7936 00:37:38.104 } 00:37:38.104 ] 00:37:38.104 }' 00:37:38.104 18:36:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:37:38.104 18:36:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:37:38.104 18:36:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:37:38.363 18:36:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:37:38.363 18:36:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:37:38.363 18:36:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:38.363 18:36:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:37:38.363 [2024-12-06 18:36:09.079960] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:37:38.363 [2024-12-06 18:36:09.152618] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:37:38.363 [2024-12-06 18:36:09.152693] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:37:38.363 [2024-12-06 18:36:09.152710] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:37:38.363 [2024-12-06 18:36:09.152722] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:37:38.363 18:36:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:38.364 18:36:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:37:38.364 18:36:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:37:38.364 18:36:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:37:38.364 18:36:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:37:38.364 18:36:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:37:38.364 18:36:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:37:38.364 18:36:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:37:38.364 18:36:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:37:38.364 18:36:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:37:38.364 18:36:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:37:38.364 18:36:09 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:38.364 18:36:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:38.364 18:36:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:38.364 18:36:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:37:38.364 18:36:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:38.364 18:36:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:37:38.364 "name": "raid_bdev1", 00:37:38.364 "uuid": "79e29238-0dca-4898-8fac-6f46d6e4cc85", 00:37:38.364 "strip_size_kb": 0, 00:37:38.364 "state": "online", 00:37:38.364 "raid_level": "raid1", 00:37:38.364 "superblock": true, 00:37:38.364 "num_base_bdevs": 2, 00:37:38.364 "num_base_bdevs_discovered": 1, 00:37:38.364 "num_base_bdevs_operational": 1, 00:37:38.364 "base_bdevs_list": [ 00:37:38.364 { 00:37:38.364 "name": null, 00:37:38.364 "uuid": "00000000-0000-0000-0000-000000000000", 00:37:38.364 "is_configured": false, 00:37:38.364 "data_offset": 0, 00:37:38.364 "data_size": 7936 00:37:38.364 }, 00:37:38.364 { 00:37:38.364 "name": "BaseBdev2", 00:37:38.364 "uuid": "09416c92-151d-5081-ba5a-8935430db804", 00:37:38.364 "is_configured": true, 00:37:38.364 "data_offset": 256, 00:37:38.364 "data_size": 7936 00:37:38.364 } 00:37:38.364 ] 00:37:38.364 }' 00:37:38.364 18:36:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:37:38.364 18:36:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:37:38.932 18:36:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:37:38.932 18:36:09 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:38.932 18:36:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:37:38.932 [2024-12-06 18:36:09.612481] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:37:38.932 [2024-12-06 18:36:09.612558] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:37:38.932 [2024-12-06 18:36:09.612591] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:37:38.932 [2024-12-06 18:36:09.612608] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:37:38.932 [2024-12-06 18:36:09.612829] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:37:38.932 [2024-12-06 18:36:09.612849] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:37:38.932 [2024-12-06 18:36:09.612904] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:37:38.932 [2024-12-06 18:36:09.612920] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:37:38.932 [2024-12-06 18:36:09.612933] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:37:38.932 [2024-12-06 18:36:09.612960] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:37:38.932 [2024-12-06 18:36:09.629923] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:37:38.932 spare 00:37:38.932 18:36:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:38.932 18:36:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@764 -- # sleep 1 00:37:38.932 [2024-12-06 18:36:09.632521] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:37:39.923 18:36:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:37:39.923 18:36:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:37:39.923 18:36:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:37:39.923 18:36:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:37:39.923 18:36:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:37:39.923 18:36:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:39.923 18:36:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:39.923 18:36:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:39.923 18:36:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:37:39.923 18:36:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:39.923 18:36:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # 
raid_bdev_info='{ 00:37:39.923 "name": "raid_bdev1", 00:37:39.923 "uuid": "79e29238-0dca-4898-8fac-6f46d6e4cc85", 00:37:39.923 "strip_size_kb": 0, 00:37:39.923 "state": "online", 00:37:39.923 "raid_level": "raid1", 00:37:39.923 "superblock": true, 00:37:39.923 "num_base_bdevs": 2, 00:37:39.923 "num_base_bdevs_discovered": 2, 00:37:39.923 "num_base_bdevs_operational": 2, 00:37:39.923 "process": { 00:37:39.923 "type": "rebuild", 00:37:39.923 "target": "spare", 00:37:39.923 "progress": { 00:37:39.923 "blocks": 2560, 00:37:39.923 "percent": 32 00:37:39.923 } 00:37:39.923 }, 00:37:39.923 "base_bdevs_list": [ 00:37:39.923 { 00:37:39.923 "name": "spare", 00:37:39.923 "uuid": "057207ee-39c8-5705-8d35-65730a5cb946", 00:37:39.923 "is_configured": true, 00:37:39.923 "data_offset": 256, 00:37:39.923 "data_size": 7936 00:37:39.923 }, 00:37:39.923 { 00:37:39.923 "name": "BaseBdev2", 00:37:39.923 "uuid": "09416c92-151d-5081-ba5a-8935430db804", 00:37:39.923 "is_configured": true, 00:37:39.923 "data_offset": 256, 00:37:39.923 "data_size": 7936 00:37:39.923 } 00:37:39.923 ] 00:37:39.923 }' 00:37:39.923 18:36:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:37:39.923 18:36:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:37:39.923 18:36:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:37:39.923 18:36:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:37:39.923 18:36:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:37:39.923 18:36:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:39.923 18:36:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:37:39.923 [2024-12-06 
18:36:10.768089] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:37:39.923 [2024-12-06 18:36:10.841574] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:37:39.923 [2024-12-06 18:36:10.841695] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:37:39.923 [2024-12-06 18:36:10.841719] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:37:39.923 [2024-12-06 18:36:10.841729] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:37:40.182 18:36:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:40.182 18:36:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:37:40.182 18:36:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:37:40.182 18:36:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:37:40.182 18:36:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:37:40.182 18:36:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:37:40.182 18:36:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:37:40.182 18:36:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:37:40.182 18:36:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:37:40.182 18:36:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:37:40.182 18:36:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:37:40.182 18:36:10 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:40.182 18:36:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:40.182 18:36:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:40.182 18:36:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:37:40.182 18:36:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:40.182 18:36:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:37:40.182 "name": "raid_bdev1", 00:37:40.182 "uuid": "79e29238-0dca-4898-8fac-6f46d6e4cc85", 00:37:40.182 "strip_size_kb": 0, 00:37:40.183 "state": "online", 00:37:40.183 "raid_level": "raid1", 00:37:40.183 "superblock": true, 00:37:40.183 "num_base_bdevs": 2, 00:37:40.183 "num_base_bdevs_discovered": 1, 00:37:40.183 "num_base_bdevs_operational": 1, 00:37:40.183 "base_bdevs_list": [ 00:37:40.183 { 00:37:40.183 "name": null, 00:37:40.183 "uuid": "00000000-0000-0000-0000-000000000000", 00:37:40.183 "is_configured": false, 00:37:40.183 "data_offset": 0, 00:37:40.183 "data_size": 7936 00:37:40.183 }, 00:37:40.183 { 00:37:40.183 "name": "BaseBdev2", 00:37:40.183 "uuid": "09416c92-151d-5081-ba5a-8935430db804", 00:37:40.183 "is_configured": true, 00:37:40.183 "data_offset": 256, 00:37:40.183 "data_size": 7936 00:37:40.183 } 00:37:40.183 ] 00:37:40.183 }' 00:37:40.183 18:36:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:37:40.183 18:36:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:37:40.441 18:36:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:37:40.441 18:36:11 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:37:40.441 18:36:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:37:40.441 18:36:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:37:40.441 18:36:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:37:40.441 18:36:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:40.441 18:36:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:40.441 18:36:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:40.441 18:36:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:37:40.441 18:36:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:40.441 18:36:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:37:40.441 "name": "raid_bdev1", 00:37:40.441 "uuid": "79e29238-0dca-4898-8fac-6f46d6e4cc85", 00:37:40.441 "strip_size_kb": 0, 00:37:40.441 "state": "online", 00:37:40.441 "raid_level": "raid1", 00:37:40.441 "superblock": true, 00:37:40.441 "num_base_bdevs": 2, 00:37:40.441 "num_base_bdevs_discovered": 1, 00:37:40.441 "num_base_bdevs_operational": 1, 00:37:40.441 "base_bdevs_list": [ 00:37:40.441 { 00:37:40.441 "name": null, 00:37:40.441 "uuid": "00000000-0000-0000-0000-000000000000", 00:37:40.441 "is_configured": false, 00:37:40.441 "data_offset": 0, 00:37:40.441 "data_size": 7936 00:37:40.441 }, 00:37:40.441 { 00:37:40.441 "name": "BaseBdev2", 00:37:40.441 "uuid": "09416c92-151d-5081-ba5a-8935430db804", 00:37:40.441 "is_configured": true, 00:37:40.441 "data_offset": 256, 
00:37:40.441 "data_size": 7936 00:37:40.441 } 00:37:40.441 ] 00:37:40.441 }' 00:37:40.441 18:36:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:37:40.441 18:36:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:37:40.441 18:36:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:37:40.698 18:36:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:37:40.698 18:36:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:37:40.698 18:36:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:40.698 18:36:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:37:40.698 18:36:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:40.698 18:36:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:37:40.698 18:36:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:40.698 18:36:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:37:40.698 [2024-12-06 18:36:11.435389] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:37:40.698 [2024-12-06 18:36:11.435459] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:37:40.698 [2024-12-06 18:36:11.435488] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:37:40.698 [2024-12-06 18:36:11.435501] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:37:40.698 [2024-12-06 18:36:11.435725] 
vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:37:40.698 [2024-12-06 18:36:11.435741] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:37:40.698 [2024-12-06 18:36:11.435798] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:37:40.698 [2024-12-06 18:36:11.435813] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:37:40.698 [2024-12-06 18:36:11.435827] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:37:40.698 [2024-12-06 18:36:11.435841] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:37:40.698 BaseBdev1 00:37:40.698 18:36:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:40.698 18:36:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@775 -- # sleep 1 00:37:41.631 18:36:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:37:41.631 18:36:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:37:41.631 18:36:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:37:41.631 18:36:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:37:41.631 18:36:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:37:41.631 18:36:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:37:41.631 18:36:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:37:41.631 18:36:12 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:37:41.631 18:36:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:37:41.631 18:36:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:37:41.631 18:36:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:41.631 18:36:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:41.631 18:36:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:41.631 18:36:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:37:41.631 18:36:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:41.631 18:36:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:37:41.631 "name": "raid_bdev1", 00:37:41.631 "uuid": "79e29238-0dca-4898-8fac-6f46d6e4cc85", 00:37:41.631 "strip_size_kb": 0, 00:37:41.631 "state": "online", 00:37:41.631 "raid_level": "raid1", 00:37:41.631 "superblock": true, 00:37:41.631 "num_base_bdevs": 2, 00:37:41.631 "num_base_bdevs_discovered": 1, 00:37:41.631 "num_base_bdevs_operational": 1, 00:37:41.631 "base_bdevs_list": [ 00:37:41.631 { 00:37:41.631 "name": null, 00:37:41.631 "uuid": "00000000-0000-0000-0000-000000000000", 00:37:41.631 "is_configured": false, 00:37:41.631 "data_offset": 0, 00:37:41.631 "data_size": 7936 00:37:41.631 }, 00:37:41.631 { 00:37:41.631 "name": "BaseBdev2", 00:37:41.631 "uuid": "09416c92-151d-5081-ba5a-8935430db804", 00:37:41.631 "is_configured": true, 00:37:41.631 "data_offset": 256, 00:37:41.631 "data_size": 7936 00:37:41.631 } 00:37:41.631 ] 00:37:41.631 }' 00:37:41.631 18:36:12 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:37:41.631 18:36:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:37:42.199 18:36:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:37:42.199 18:36:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:37:42.199 18:36:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:37:42.199 18:36:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:37:42.199 18:36:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:37:42.199 18:36:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:42.199 18:36:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:42.199 18:36:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:42.199 18:36:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:37:42.199 18:36:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:42.199 18:36:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:37:42.199 "name": "raid_bdev1", 00:37:42.199 "uuid": "79e29238-0dca-4898-8fac-6f46d6e4cc85", 00:37:42.199 "strip_size_kb": 0, 00:37:42.199 "state": "online", 00:37:42.199 "raid_level": "raid1", 00:37:42.199 "superblock": true, 00:37:42.199 "num_base_bdevs": 2, 00:37:42.199 "num_base_bdevs_discovered": 1, 00:37:42.199 "num_base_bdevs_operational": 1, 00:37:42.199 "base_bdevs_list": [ 00:37:42.199 { 00:37:42.199 "name": 
null, 00:37:42.199 "uuid": "00000000-0000-0000-0000-000000000000", 00:37:42.199 "is_configured": false, 00:37:42.199 "data_offset": 0, 00:37:42.199 "data_size": 7936 00:37:42.199 }, 00:37:42.199 { 00:37:42.199 "name": "BaseBdev2", 00:37:42.199 "uuid": "09416c92-151d-5081-ba5a-8935430db804", 00:37:42.199 "is_configured": true, 00:37:42.199 "data_offset": 256, 00:37:42.199 "data_size": 7936 00:37:42.199 } 00:37:42.199 ] 00:37:42.199 }' 00:37:42.199 18:36:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:37:42.199 18:36:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:37:42.199 18:36:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:37:42.199 18:36:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:37:42.199 18:36:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:37:42.199 18:36:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@652 -- # local es=0 00:37:42.199 18:36:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:37:42.199 18:36:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:37:42.199 18:36:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:37:42.199 18:36:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:37:42.199 18:36:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:37:42.200 18:36:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:37:42.200 18:36:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:42.200 18:36:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:37:42.200 [2024-12-06 18:36:13.042840] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:37:42.200 [2024-12-06 18:36:13.043037] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:37:42.200 [2024-12-06 18:36:13.043062] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:37:42.200 request: 00:37:42.200 { 00:37:42.200 "base_bdev": "BaseBdev1", 00:37:42.200 "raid_bdev": "raid_bdev1", 00:37:42.200 "method": "bdev_raid_add_base_bdev", 00:37:42.200 "req_id": 1 00:37:42.200 } 00:37:42.200 Got JSON-RPC error response 00:37:42.200 response: 00:37:42.200 { 00:37:42.200 "code": -22, 00:37:42.200 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:37:42.200 } 00:37:42.200 18:36:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:37:42.200 18:36:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@655 -- # es=1 00:37:42.200 18:36:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:37:42.200 18:36:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:37:42.200 18:36:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:37:42.200 18:36:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@779 -- # sleep 1 00:37:43.137 18:36:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state 
raid_bdev1 online raid1 0 1 00:37:43.137 18:36:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:37:43.137 18:36:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:37:43.137 18:36:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:37:43.137 18:36:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:37:43.137 18:36:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:37:43.137 18:36:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:37:43.137 18:36:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:37:43.137 18:36:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:37:43.137 18:36:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:37:43.137 18:36:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:43.137 18:36:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:43.137 18:36:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:43.137 18:36:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:37:43.396 18:36:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:43.396 18:36:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:37:43.396 "name": "raid_bdev1", 00:37:43.396 "uuid": "79e29238-0dca-4898-8fac-6f46d6e4cc85", 00:37:43.396 "strip_size_kb": 0, 
00:37:43.396 "state": "online", 00:37:43.396 "raid_level": "raid1", 00:37:43.396 "superblock": true, 00:37:43.396 "num_base_bdevs": 2, 00:37:43.396 "num_base_bdevs_discovered": 1, 00:37:43.396 "num_base_bdevs_operational": 1, 00:37:43.396 "base_bdevs_list": [ 00:37:43.396 { 00:37:43.396 "name": null, 00:37:43.396 "uuid": "00000000-0000-0000-0000-000000000000", 00:37:43.396 "is_configured": false, 00:37:43.396 "data_offset": 0, 00:37:43.396 "data_size": 7936 00:37:43.396 }, 00:37:43.396 { 00:37:43.396 "name": "BaseBdev2", 00:37:43.396 "uuid": "09416c92-151d-5081-ba5a-8935430db804", 00:37:43.396 "is_configured": true, 00:37:43.396 "data_offset": 256, 00:37:43.396 "data_size": 7936 00:37:43.396 } 00:37:43.396 ] 00:37:43.396 }' 00:37:43.396 18:36:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:37:43.396 18:36:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:37:43.655 18:36:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:37:43.655 18:36:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:37:43.655 18:36:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:37:43.655 18:36:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:37:43.655 18:36:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:37:43.655 18:36:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:37:43.655 18:36:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:43.655 18:36:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:37:43.655 18:36:14 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:37:43.655 18:36:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:43.655 18:36:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:37:43.655 "name": "raid_bdev1", 00:37:43.655 "uuid": "79e29238-0dca-4898-8fac-6f46d6e4cc85", 00:37:43.655 "strip_size_kb": 0, 00:37:43.655 "state": "online", 00:37:43.655 "raid_level": "raid1", 00:37:43.655 "superblock": true, 00:37:43.655 "num_base_bdevs": 2, 00:37:43.655 "num_base_bdevs_discovered": 1, 00:37:43.655 "num_base_bdevs_operational": 1, 00:37:43.655 "base_bdevs_list": [ 00:37:43.655 { 00:37:43.655 "name": null, 00:37:43.655 "uuid": "00000000-0000-0000-0000-000000000000", 00:37:43.655 "is_configured": false, 00:37:43.655 "data_offset": 0, 00:37:43.655 "data_size": 7936 00:37:43.655 }, 00:37:43.655 { 00:37:43.655 "name": "BaseBdev2", 00:37:43.655 "uuid": "09416c92-151d-5081-ba5a-8935430db804", 00:37:43.655 "is_configured": true, 00:37:43.655 "data_offset": 256, 00:37:43.655 "data_size": 7936 00:37:43.655 } 00:37:43.655 ] 00:37:43.655 }' 00:37:43.656 18:36:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:37:43.656 18:36:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:37:43.656 18:36:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:37:43.915 18:36:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:37:43.915 18:36:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@784 -- # killprocess 88749 00:37:43.915 18:36:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@954 -- # '[' -z 88749 ']' 00:37:43.915 18:36:14 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@958 -- # kill -0 88749 00:37:43.915 18:36:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@959 -- # uname 00:37:43.915 18:36:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:37:43.915 18:36:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 88749 00:37:43.915 18:36:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:37:43.915 killing process with pid 88749 00:37:43.915 Received shutdown signal, test time was about 60.000000 seconds 00:37:43.915 00:37:43.915 Latency(us) 00:37:43.915 [2024-12-06T18:36:14.864Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:43.915 [2024-12-06T18:36:14.864Z] =================================================================================================================== 00:37:43.915 [2024-12-06T18:36:14.864Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:37:43.915 18:36:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:37:43.915 18:36:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@972 -- # echo 'killing process with pid 88749' 00:37:43.915 18:36:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@973 -- # kill 88749 00:37:43.915 [2024-12-06 18:36:14.663958] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:37:43.915 18:36:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@978 -- # wait 88749 00:37:43.915 [2024-12-06 18:36:14.664111] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:37:43.915 [2024-12-06 18:36:14.664183] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to 
free all in destruct 00:37:43.915 [2024-12-06 18:36:14.664200] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:37:44.174 [2024-12-06 18:36:14.975747] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:37:45.549 18:36:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@786 -- # return 0 00:37:45.549 00:37:45.549 real 0m17.399s 00:37:45.549 user 0m22.442s 00:37:45.549 sys 0m1.920s 00:37:45.549 18:36:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:45.549 ************************************ 00:37:45.549 END TEST raid_rebuild_test_sb_md_interleaved 00:37:45.549 ************************************ 00:37:45.549 18:36:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:37:45.549 18:36:16 bdev_raid -- bdev/bdev_raid.sh@1015 -- # trap - EXIT 00:37:45.549 18:36:16 bdev_raid -- bdev/bdev_raid.sh@1016 -- # cleanup 00:37:45.549 18:36:16 bdev_raid -- bdev/bdev_raid.sh@56 -- # '[' -n 88749 ']' 00:37:45.549 18:36:16 bdev_raid -- bdev/bdev_raid.sh@56 -- # ps -p 88749 00:37:45.549 18:36:16 bdev_raid -- bdev/bdev_raid.sh@60 -- # rm -rf /raidtest 00:37:45.549 00:37:45.549 real 12m3.120s 00:37:45.549 user 15m59.650s 00:37:45.550 sys 2m13.129s 00:37:45.550 18:36:16 bdev_raid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:45.550 ************************************ 00:37:45.550 END TEST bdev_raid 00:37:45.550 ************************************ 00:37:45.550 18:36:16 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:37:45.550 18:36:16 -- spdk/autotest.sh@190 -- # run_test spdkcli_raid /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:37:45.550 18:36:16 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:37:45.550 18:36:16 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:37:45.550 18:36:16 -- common/autotest_common.sh@10 -- # set +x 00:37:45.550 
************************************ 00:37:45.550 START TEST spdkcli_raid 00:37:45.550 ************************************ 00:37:45.550 18:36:16 spdkcli_raid -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:37:45.550 * Looking for test storage... 00:37:45.550 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:37:45.550 18:36:16 spdkcli_raid -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:37:45.550 18:36:16 spdkcli_raid -- common/autotest_common.sh@1711 -- # lcov --version 00:37:45.550 18:36:16 spdkcli_raid -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:37:45.809 18:36:16 spdkcli_raid -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:37:45.809 18:36:16 spdkcli_raid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:37:45.809 18:36:16 spdkcli_raid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:37:45.809 18:36:16 spdkcli_raid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:37:45.809 18:36:16 spdkcli_raid -- scripts/common.sh@336 -- # IFS=.-: 00:37:45.809 18:36:16 spdkcli_raid -- scripts/common.sh@336 -- # read -ra ver1 00:37:45.809 18:36:16 spdkcli_raid -- scripts/common.sh@337 -- # IFS=.-: 00:37:45.809 18:36:16 spdkcli_raid -- scripts/common.sh@337 -- # read -ra ver2 00:37:45.809 18:36:16 spdkcli_raid -- scripts/common.sh@338 -- # local 'op=<' 00:37:45.809 18:36:16 spdkcli_raid -- scripts/common.sh@340 -- # ver1_l=2 00:37:45.809 18:36:16 spdkcli_raid -- scripts/common.sh@341 -- # ver2_l=1 00:37:45.809 18:36:16 spdkcli_raid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:37:45.809 18:36:16 spdkcli_raid -- scripts/common.sh@344 -- # case "$op" in 00:37:45.809 18:36:16 spdkcli_raid -- scripts/common.sh@345 -- # : 1 00:37:45.809 18:36:16 spdkcli_raid -- scripts/common.sh@364 -- # (( v = 0 )) 00:37:45.809 18:36:16 spdkcli_raid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:37:45.809 18:36:16 spdkcli_raid -- scripts/common.sh@365 -- # decimal 1 00:37:45.809 18:36:16 spdkcli_raid -- scripts/common.sh@353 -- # local d=1 00:37:45.809 18:36:16 spdkcli_raid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:37:45.809 18:36:16 spdkcli_raid -- scripts/common.sh@355 -- # echo 1 00:37:45.809 18:36:16 spdkcli_raid -- scripts/common.sh@365 -- # ver1[v]=1 00:37:45.809 18:36:16 spdkcli_raid -- scripts/common.sh@366 -- # decimal 2 00:37:45.809 18:36:16 spdkcli_raid -- scripts/common.sh@353 -- # local d=2 00:37:45.809 18:36:16 spdkcli_raid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:37:45.809 18:36:16 spdkcli_raid -- scripts/common.sh@355 -- # echo 2 00:37:45.809 18:36:16 spdkcli_raid -- scripts/common.sh@366 -- # ver2[v]=2 00:37:45.809 18:36:16 spdkcli_raid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:37:45.809 18:36:16 spdkcli_raid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:37:45.809 18:36:16 spdkcli_raid -- scripts/common.sh@368 -- # return 0 00:37:45.809 18:36:16 spdkcli_raid -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:37:45.809 18:36:16 spdkcli_raid -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:37:45.809 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:45.809 --rc genhtml_branch_coverage=1 00:37:45.809 --rc genhtml_function_coverage=1 00:37:45.809 --rc genhtml_legend=1 00:37:45.809 --rc geninfo_all_blocks=1 00:37:45.809 --rc geninfo_unexecuted_blocks=1 00:37:45.809 00:37:45.809 ' 00:37:45.809 18:36:16 spdkcli_raid -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:37:45.809 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:45.809 --rc genhtml_branch_coverage=1 00:37:45.809 --rc genhtml_function_coverage=1 00:37:45.809 --rc genhtml_legend=1 00:37:45.809 --rc geninfo_all_blocks=1 00:37:45.809 --rc geninfo_unexecuted_blocks=1 00:37:45.809 00:37:45.809 ' 00:37:45.809 
18:36:16 spdkcli_raid -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:37:45.809 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:45.809 --rc genhtml_branch_coverage=1 00:37:45.809 --rc genhtml_function_coverage=1 00:37:45.809 --rc genhtml_legend=1 00:37:45.809 --rc geninfo_all_blocks=1 00:37:45.809 --rc geninfo_unexecuted_blocks=1 00:37:45.809 00:37:45.809 ' 00:37:45.809 18:36:16 spdkcli_raid -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:37:45.809 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:45.809 --rc genhtml_branch_coverage=1 00:37:45.809 --rc genhtml_function_coverage=1 00:37:45.809 --rc genhtml_legend=1 00:37:45.809 --rc geninfo_all_blocks=1 00:37:45.809 --rc geninfo_unexecuted_blocks=1 00:37:45.809 00:37:45.809 ' 00:37:45.809 18:36:16 spdkcli_raid -- spdkcli/raid.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:37:45.809 18:36:16 spdkcli_raid -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:37:45.809 18:36:16 spdkcli_raid -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:37:45.809 18:36:16 spdkcli_raid -- spdkcli/raid.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/common.sh 00:37:45.809 18:36:16 spdkcli_raid -- iscsi_tgt/common.sh@9 -- # ISCSI_BRIDGE=iscsi_br 00:37:45.809 18:36:16 spdkcli_raid -- iscsi_tgt/common.sh@10 -- # INITIATOR_INTERFACE=spdk_init_int 00:37:45.809 18:36:16 spdkcli_raid -- iscsi_tgt/common.sh@11 -- # INITIATOR_BRIDGE=init_br 00:37:45.809 18:36:16 spdkcli_raid -- iscsi_tgt/common.sh@12 -- # TARGET_NAMESPACE=spdk_iscsi_ns 00:37:45.809 18:36:16 spdkcli_raid -- iscsi_tgt/common.sh@13 -- # TARGET_NS_CMD=(ip netns exec "$TARGET_NAMESPACE") 00:37:45.809 18:36:16 spdkcli_raid -- iscsi_tgt/common.sh@14 -- # TARGET_INTERFACE=spdk_tgt_int 00:37:45.809 18:36:16 spdkcli_raid -- iscsi_tgt/common.sh@15 -- # TARGET_INTERFACE2=spdk_tgt_int2 
00:37:45.809 18:36:16 spdkcli_raid -- iscsi_tgt/common.sh@16 -- # TARGET_BRIDGE=tgt_br 00:37:45.809 18:36:16 spdkcli_raid -- iscsi_tgt/common.sh@17 -- # TARGET_BRIDGE2=tgt_br2 00:37:45.809 18:36:16 spdkcli_raid -- iscsi_tgt/common.sh@20 -- # TARGET_IP=10.0.0.1 00:37:45.809 18:36:16 spdkcli_raid -- iscsi_tgt/common.sh@21 -- # TARGET_IP2=10.0.0.3 00:37:45.809 18:36:16 spdkcli_raid -- iscsi_tgt/common.sh@22 -- # INITIATOR_IP=10.0.0.2 00:37:45.809 18:36:16 spdkcli_raid -- iscsi_tgt/common.sh@23 -- # ISCSI_PORT=3260 00:37:45.809 18:36:16 spdkcli_raid -- iscsi_tgt/common.sh@24 -- # NETMASK=10.0.0.2/32 00:37:45.809 18:36:16 spdkcli_raid -- iscsi_tgt/common.sh@25 -- # INITIATOR_TAG=2 00:37:45.809 18:36:16 spdkcli_raid -- iscsi_tgt/common.sh@26 -- # INITIATOR_NAME=ANY 00:37:45.809 18:36:16 spdkcli_raid -- iscsi_tgt/common.sh@27 -- # PORTAL_TAG=1 00:37:45.809 18:36:16 spdkcli_raid -- iscsi_tgt/common.sh@28 -- # ISCSI_APP=("${TARGET_NS_CMD[@]}" "${ISCSI_APP[@]}") 00:37:45.809 18:36:16 spdkcli_raid -- iscsi_tgt/common.sh@29 -- # ISCSI_TEST_CORE_MASK=0xF 00:37:45.809 18:36:16 spdkcli_raid -- spdkcli/raid.sh@12 -- # MATCH_FILE=spdkcli_raid.test 00:37:45.809 18:36:16 spdkcli_raid -- spdkcli/raid.sh@13 -- # SPDKCLI_BRANCH=/bdevs 00:37:45.809 18:36:16 spdkcli_raid -- spdkcli/raid.sh@14 -- # dirname /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:37:45.809 18:36:16 spdkcli_raid -- spdkcli/raid.sh@14 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/spdkcli 00:37:45.809 18:36:16 spdkcli_raid -- spdkcli/raid.sh@14 -- # testdir=/home/vagrant/spdk_repo/spdk/test/spdkcli 00:37:45.809 18:36:16 spdkcli_raid -- spdkcli/raid.sh@15 -- # . 
/home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:37:45.809 18:36:16 spdkcli_raid -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:37:45.809 18:36:16 spdkcli_raid -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:37:45.809 18:36:16 spdkcli_raid -- spdkcli/raid.sh@17 -- # trap cleanup EXIT 00:37:45.809 18:36:16 spdkcli_raid -- spdkcli/raid.sh@19 -- # timing_enter run_spdk_tgt 00:37:45.809 18:36:16 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:37:45.809 18:36:16 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:37:45.809 18:36:16 spdkcli_raid -- spdkcli/raid.sh@20 -- # run_spdk_tgt 00:37:45.809 18:36:16 spdkcli_raid -- spdkcli/common.sh@27 -- # spdk_tgt_pid=89425 00:37:45.809 18:36:16 spdkcli_raid -- spdkcli/common.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:37:45.809 18:36:16 spdkcli_raid -- spdkcli/common.sh@28 -- # waitforlisten 89425 00:37:45.809 18:36:16 spdkcli_raid -- common/autotest_common.sh@835 -- # '[' -z 89425 ']' 00:37:45.809 18:36:16 spdkcli_raid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:45.809 18:36:16 spdkcli_raid -- common/autotest_common.sh@840 -- # local max_retries=100 00:37:45.809 18:36:16 spdkcli_raid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:45.809 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:45.809 18:36:16 spdkcli_raid -- common/autotest_common.sh@844 -- # xtrace_disable 00:37:45.809 18:36:16 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:37:45.809 [2024-12-06 18:36:16.707474] Starting SPDK v25.01-pre git sha1 b6a18b192 / DPDK 24.03.0 initialization... 
00:37:45.809 [2024-12-06 18:36:16.707737] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89425 ] 00:37:46.069 [2024-12-06 18:36:16.895241] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:37:46.328 [2024-12-06 18:36:17.025307] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:37:46.328 [2024-12-06 18:36:17.025344] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:37:47.266 18:36:18 spdkcli_raid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:37:47.266 18:36:18 spdkcli_raid -- common/autotest_common.sh@868 -- # return 0 00:37:47.266 18:36:18 spdkcli_raid -- spdkcli/raid.sh@21 -- # timing_exit run_spdk_tgt 00:37:47.266 18:36:18 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:37:47.266 18:36:18 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:37:47.266 18:36:18 spdkcli_raid -- spdkcli/raid.sh@23 -- # timing_enter spdkcli_create_malloc 00:37:47.266 18:36:18 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:37:47.266 18:36:18 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:37:47.266 18:36:18 spdkcli_raid -- spdkcli/raid.sh@26 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 8 512 Malloc1'\'' '\''Malloc1'\'' True 00:37:47.266 '\''/bdevs/malloc create 8 512 Malloc2'\'' '\''Malloc2'\'' True 00:37:47.266 ' 00:37:49.172 Executing command: ['/bdevs/malloc create 8 512 Malloc1', 'Malloc1', True] 00:37:49.172 Executing command: ['/bdevs/malloc create 8 512 Malloc2', 'Malloc2', True] 00:37:49.172 18:36:19 spdkcli_raid -- spdkcli/raid.sh@27 -- # timing_exit spdkcli_create_malloc 00:37:49.172 18:36:19 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:37:49.172 18:36:19 spdkcli_raid -- 
common/autotest_common.sh@10 -- # set +x 00:37:49.172 18:36:19 spdkcli_raid -- spdkcli/raid.sh@29 -- # timing_enter spdkcli_create_raid 00:37:49.172 18:36:19 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:37:49.172 18:36:19 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:37:49.172 18:36:19 spdkcli_raid -- spdkcli/raid.sh@31 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/raid_volume create testraid 0 "Malloc1 Malloc2" 4'\'' '\''testraid'\'' True 00:37:49.172 ' 00:37:50.110 Executing command: ['/bdevs/raid_volume create testraid 0 "Malloc1 Malloc2" 4', 'testraid', True] 00:37:50.110 18:36:20 spdkcli_raid -- spdkcli/raid.sh@32 -- # timing_exit spdkcli_create_raid 00:37:50.110 18:36:20 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:37:50.110 18:36:20 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:37:50.110 18:36:21 spdkcli_raid -- spdkcli/raid.sh@34 -- # timing_enter spdkcli_check_match 00:37:50.110 18:36:21 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:37:50.110 18:36:21 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:37:50.110 18:36:21 spdkcli_raid -- spdkcli/raid.sh@35 -- # check_match 00:37:50.110 18:36:21 spdkcli_raid -- spdkcli/common.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/spdkcli.py ll /bdevs 00:37:50.679 18:36:21 spdkcli_raid -- spdkcli/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/test/app/match/match /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_raid.test.match 00:37:50.679 18:36:21 spdkcli_raid -- spdkcli/common.sh@46 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_raid.test 00:37:50.679 18:36:21 spdkcli_raid -- spdkcli/raid.sh@36 -- # timing_exit spdkcli_check_match 00:37:50.679 18:36:21 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:37:50.679 18:36:21 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:37:50.939 18:36:21 spdkcli_raid -- 
spdkcli/raid.sh@38 -- # timing_enter spdkcli_delete_raid 00:37:50.939 18:36:21 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:37:50.939 18:36:21 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:37:50.939 18:36:21 spdkcli_raid -- spdkcli/raid.sh@40 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/raid_volume delete testraid'\'' '\'''\'' True 00:37:50.939 ' 00:37:51.878 Executing command: ['/bdevs/raid_volume delete testraid', '', True] 00:37:51.878 18:36:22 spdkcli_raid -- spdkcli/raid.sh@41 -- # timing_exit spdkcli_delete_raid 00:37:51.878 18:36:22 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:37:51.878 18:36:22 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:37:51.878 18:36:22 spdkcli_raid -- spdkcli/raid.sh@43 -- # timing_enter spdkcli_delete_malloc 00:37:51.878 18:36:22 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:37:51.878 18:36:22 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:37:51.878 18:36:22 spdkcli_raid -- spdkcli/raid.sh@46 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc delete Malloc1'\'' '\'''\'' True 00:37:51.878 '\''/bdevs/malloc delete Malloc2'\'' '\'''\'' True 00:37:51.878 ' 00:37:53.257 Executing command: ['/bdevs/malloc delete Malloc1', '', True] 00:37:53.257 Executing command: ['/bdevs/malloc delete Malloc2', '', True] 00:37:53.517 18:36:24 spdkcli_raid -- spdkcli/raid.sh@47 -- # timing_exit spdkcli_delete_malloc 00:37:53.517 18:36:24 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:37:53.517 18:36:24 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:37:53.517 18:36:24 spdkcli_raid -- spdkcli/raid.sh@49 -- # killprocess 89425 00:37:53.517 18:36:24 spdkcli_raid -- common/autotest_common.sh@954 -- # '[' -z 89425 ']' 00:37:53.517 18:36:24 spdkcli_raid -- common/autotest_common.sh@958 -- # kill -0 89425 00:37:53.517 18:36:24 spdkcli_raid -- 
common/autotest_common.sh@959 -- # uname 00:37:53.517 18:36:24 spdkcli_raid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:37:53.517 18:36:24 spdkcli_raid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 89425 00:37:53.517 18:36:24 spdkcli_raid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:37:53.517 killing process with pid 89425 00:37:53.517 18:36:24 spdkcli_raid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:37:53.517 18:36:24 spdkcli_raid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 89425' 00:37:53.517 18:36:24 spdkcli_raid -- common/autotest_common.sh@973 -- # kill 89425 00:37:53.517 18:36:24 spdkcli_raid -- common/autotest_common.sh@978 -- # wait 89425 00:37:56.052 Process with pid 89425 is not found 00:37:56.052 18:36:26 spdkcli_raid -- spdkcli/raid.sh@1 -- # cleanup 00:37:56.052 18:36:26 spdkcli_raid -- spdkcli/common.sh@10 -- # '[' -n 89425 ']' 00:37:56.052 18:36:26 spdkcli_raid -- spdkcli/common.sh@11 -- # killprocess 89425 00:37:56.052 18:36:26 spdkcli_raid -- common/autotest_common.sh@954 -- # '[' -z 89425 ']' 00:37:56.052 18:36:26 spdkcli_raid -- common/autotest_common.sh@958 -- # kill -0 89425 00:37:56.052 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (89425) - No such process 00:37:56.053 18:36:26 spdkcli_raid -- common/autotest_common.sh@981 -- # echo 'Process with pid 89425 is not found' 00:37:56.053 18:36:26 spdkcli_raid -- spdkcli/common.sh@13 -- # '[' -n '' ']' 00:37:56.053 18:36:26 spdkcli_raid -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:37:56.053 18:36:26 spdkcli_raid -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:37:56.053 18:36:26 spdkcli_raid -- spdkcli/common.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_raid.test /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:37:56.053 ************************************ 00:37:56.053 END TEST spdkcli_raid 
00:37:56.053 ************************************ 00:37:56.053 00:37:56.053 real 0m10.614s 00:37:56.053 user 0m21.426s 00:37:56.053 sys 0m1.439s 00:37:56.053 18:36:26 spdkcli_raid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:56.053 18:36:26 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:37:56.340 18:36:27 -- spdk/autotest.sh@191 -- # run_test blockdev_raid5f /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh raid5f 00:37:56.340 18:36:27 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:37:56.340 18:36:27 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:37:56.340 18:36:27 -- common/autotest_common.sh@10 -- # set +x 00:37:56.340 ************************************ 00:37:56.340 START TEST blockdev_raid5f 00:37:56.340 ************************************ 00:37:56.340 18:36:27 blockdev_raid5f -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh raid5f 00:37:56.340 * Looking for test storage... 00:37:56.340 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:37:56.340 18:36:27 blockdev_raid5f -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:37:56.340 18:36:27 blockdev_raid5f -- common/autotest_common.sh@1711 -- # lcov --version 00:37:56.340 18:36:27 blockdev_raid5f -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:37:56.340 18:36:27 blockdev_raid5f -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:37:56.340 18:36:27 blockdev_raid5f -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:37:56.340 18:36:27 blockdev_raid5f -- scripts/common.sh@333 -- # local ver1 ver1_l 00:37:56.340 18:36:27 blockdev_raid5f -- scripts/common.sh@334 -- # local ver2 ver2_l 00:37:56.340 18:36:27 blockdev_raid5f -- scripts/common.sh@336 -- # IFS=.-: 00:37:56.340 18:36:27 blockdev_raid5f -- scripts/common.sh@336 -- # read -ra ver1 00:37:56.340 18:36:27 blockdev_raid5f -- scripts/common.sh@337 -- # IFS=.-: 00:37:56.340 18:36:27 blockdev_raid5f -- scripts/common.sh@337 -- # read -ra 
ver2 00:37:56.340 18:36:27 blockdev_raid5f -- scripts/common.sh@338 -- # local 'op=<' 00:37:56.340 18:36:27 blockdev_raid5f -- scripts/common.sh@340 -- # ver1_l=2 00:37:56.340 18:36:27 blockdev_raid5f -- scripts/common.sh@341 -- # ver2_l=1 00:37:56.340 18:36:27 blockdev_raid5f -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:37:56.340 18:36:27 blockdev_raid5f -- scripts/common.sh@344 -- # case "$op" in 00:37:56.340 18:36:27 blockdev_raid5f -- scripts/common.sh@345 -- # : 1 00:37:56.340 18:36:27 blockdev_raid5f -- scripts/common.sh@364 -- # (( v = 0 )) 00:37:56.340 18:36:27 blockdev_raid5f -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:37:56.340 18:36:27 blockdev_raid5f -- scripts/common.sh@365 -- # decimal 1 00:37:56.340 18:36:27 blockdev_raid5f -- scripts/common.sh@353 -- # local d=1 00:37:56.340 18:36:27 blockdev_raid5f -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:37:56.340 18:36:27 blockdev_raid5f -- scripts/common.sh@355 -- # echo 1 00:37:56.340 18:36:27 blockdev_raid5f -- scripts/common.sh@365 -- # ver1[v]=1 00:37:56.340 18:36:27 blockdev_raid5f -- scripts/common.sh@366 -- # decimal 2 00:37:56.340 18:36:27 blockdev_raid5f -- scripts/common.sh@353 -- # local d=2 00:37:56.340 18:36:27 blockdev_raid5f -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:37:56.340 18:36:27 blockdev_raid5f -- scripts/common.sh@355 -- # echo 2 00:37:56.340 18:36:27 blockdev_raid5f -- scripts/common.sh@366 -- # ver2[v]=2 00:37:56.340 18:36:27 blockdev_raid5f -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:37:56.340 18:36:27 blockdev_raid5f -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:37:56.340 18:36:27 blockdev_raid5f -- scripts/common.sh@368 -- # return 0 00:37:56.340 18:36:27 blockdev_raid5f -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:37:56.340 18:36:27 blockdev_raid5f -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:37:56.340 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:56.340 --rc genhtml_branch_coverage=1 00:37:56.340 --rc genhtml_function_coverage=1 00:37:56.340 --rc genhtml_legend=1 00:37:56.340 --rc geninfo_all_blocks=1 00:37:56.340 --rc geninfo_unexecuted_blocks=1 00:37:56.340 00:37:56.340 ' 00:37:56.340 18:36:27 blockdev_raid5f -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:37:56.340 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:56.340 --rc genhtml_branch_coverage=1 00:37:56.340 --rc genhtml_function_coverage=1 00:37:56.340 --rc genhtml_legend=1 00:37:56.340 --rc geninfo_all_blocks=1 00:37:56.340 --rc geninfo_unexecuted_blocks=1 00:37:56.340 00:37:56.340 ' 00:37:56.340 18:36:27 blockdev_raid5f -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:37:56.340 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:56.340 --rc genhtml_branch_coverage=1 00:37:56.340 --rc genhtml_function_coverage=1 00:37:56.340 --rc genhtml_legend=1 00:37:56.340 --rc geninfo_all_blocks=1 00:37:56.340 --rc geninfo_unexecuted_blocks=1 00:37:56.340 00:37:56.340 ' 00:37:56.340 18:36:27 blockdev_raid5f -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:37:56.340 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:56.340 --rc genhtml_branch_coverage=1 00:37:56.340 --rc genhtml_function_coverage=1 00:37:56.340 --rc genhtml_legend=1 00:37:56.340 --rc geninfo_all_blocks=1 00:37:56.340 --rc geninfo_unexecuted_blocks=1 00:37:56.340 00:37:56.340 ' 00:37:56.340 18:36:27 blockdev_raid5f -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:37:56.340 18:36:27 blockdev_raid5f -- bdev/nbd_common.sh@6 -- # set -e 00:37:56.340 18:36:27 blockdev_raid5f -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:37:56.340 18:36:27 blockdev_raid5f -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:37:56.340 18:36:27 blockdev_raid5f -- bdev/blockdev.sh@14 -- # 
nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:37:56.340 18:36:27 blockdev_raid5f -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:37:56.340 18:36:27 blockdev_raid5f -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:37:56.340 18:36:27 blockdev_raid5f -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:37:56.340 18:36:27 blockdev_raid5f -- bdev/blockdev.sh@20 -- # : 00:37:56.340 18:36:27 blockdev_raid5f -- bdev/blockdev.sh@707 -- # QOS_DEV_1=Malloc_0 00:37:56.340 18:36:27 blockdev_raid5f -- bdev/blockdev.sh@708 -- # QOS_DEV_2=Null_1 00:37:56.340 18:36:27 blockdev_raid5f -- bdev/blockdev.sh@709 -- # QOS_RUN_TIME=5 00:37:56.340 18:36:27 blockdev_raid5f -- bdev/blockdev.sh@711 -- # uname -s 00:37:56.340 18:36:27 blockdev_raid5f -- bdev/blockdev.sh@711 -- # '[' Linux = Linux ']' 00:37:56.340 18:36:27 blockdev_raid5f -- bdev/blockdev.sh@713 -- # PRE_RESERVED_MEM=0 00:37:56.340 18:36:27 blockdev_raid5f -- bdev/blockdev.sh@719 -- # test_type=raid5f 00:37:56.340 18:36:27 blockdev_raid5f -- bdev/blockdev.sh@720 -- # crypto_device= 00:37:56.340 18:36:27 blockdev_raid5f -- bdev/blockdev.sh@721 -- # dek= 00:37:56.340 18:36:27 blockdev_raid5f -- bdev/blockdev.sh@722 -- # env_ctx= 00:37:56.340 18:36:27 blockdev_raid5f -- bdev/blockdev.sh@723 -- # wait_for_rpc= 00:37:56.340 18:36:27 blockdev_raid5f -- bdev/blockdev.sh@724 -- # '[' -n '' ']' 00:37:56.340 18:36:27 blockdev_raid5f -- bdev/blockdev.sh@727 -- # [[ raid5f == bdev ]] 00:37:56.340 18:36:27 blockdev_raid5f -- bdev/blockdev.sh@727 -- # [[ raid5f == crypto_* ]] 00:37:56.340 18:36:27 blockdev_raid5f -- bdev/blockdev.sh@730 -- # start_spdk_tgt 00:37:56.340 18:36:27 blockdev_raid5f -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=89712 00:37:56.340 18:36:27 blockdev_raid5f -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:37:56.340 18:36:27 blockdev_raid5f -- bdev/blockdev.sh@46 -- # 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:37:56.340 18:36:27 blockdev_raid5f -- bdev/blockdev.sh@49 -- # waitforlisten 89712 00:37:56.340 18:36:27 blockdev_raid5f -- common/autotest_common.sh@835 -- # '[' -z 89712 ']' 00:37:56.340 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:56.340 18:36:27 blockdev_raid5f -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:56.340 18:36:27 blockdev_raid5f -- common/autotest_common.sh@840 -- # local max_retries=100 00:37:56.340 18:36:27 blockdev_raid5f -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:56.340 18:36:27 blockdev_raid5f -- common/autotest_common.sh@844 -- # xtrace_disable 00:37:56.340 18:36:27 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:37:56.600 [2024-12-06 18:36:27.397796] Starting SPDK v25.01-pre git sha1 b6a18b192 / DPDK 24.03.0 initialization... 
00:37:56.600 [2024-12-06 18:36:27.398188] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89712 ] 00:37:56.859 [2024-12-06 18:36:27.583262] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:56.859 [2024-12-06 18:36:27.712084] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:37:57.799 18:36:28 blockdev_raid5f -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:37:57.799 18:36:28 blockdev_raid5f -- common/autotest_common.sh@868 -- # return 0 00:37:57.799 18:36:28 blockdev_raid5f -- bdev/blockdev.sh@731 -- # case "$test_type" in 00:37:57.799 18:36:28 blockdev_raid5f -- bdev/blockdev.sh@763 -- # setup_raid5f_conf 00:37:57.799 18:36:28 blockdev_raid5f -- bdev/blockdev.sh@279 -- # rpc_cmd 00:37:57.799 18:36:28 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:57.799 18:36:28 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:37:57.799 Malloc0 00:37:58.059 Malloc1 00:37:58.059 Malloc2 00:37:58.059 18:36:28 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:58.059 18:36:28 blockdev_raid5f -- bdev/blockdev.sh@774 -- # rpc_cmd bdev_wait_for_examine 00:37:58.059 18:36:28 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:58.059 18:36:28 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:37:58.059 18:36:28 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:58.059 18:36:28 blockdev_raid5f -- bdev/blockdev.sh@777 -- # cat 00:37:58.059 18:36:28 blockdev_raid5f -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n accel 00:37:58.059 18:36:28 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:58.059 18:36:28 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:37:58.059 18:36:28 
blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:58.059 18:36:28 blockdev_raid5f -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n bdev 00:37:58.059 18:36:28 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:58.059 18:36:28 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:37:58.059 18:36:28 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:58.059 18:36:28 blockdev_raid5f -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n iobuf 00:37:58.059 18:36:28 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:58.059 18:36:28 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:37:58.059 18:36:28 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:58.059 18:36:28 blockdev_raid5f -- bdev/blockdev.sh@785 -- # mapfile -t bdevs 00:37:58.059 18:36:28 blockdev_raid5f -- bdev/blockdev.sh@785 -- # rpc_cmd bdev_get_bdevs 00:37:58.059 18:36:28 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:58.059 18:36:28 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:37:58.059 18:36:28 blockdev_raid5f -- bdev/blockdev.sh@785 -- # jq -r '.[] | select(.claimed == false)' 00:37:58.059 18:36:28 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:58.059 18:36:28 blockdev_raid5f -- bdev/blockdev.sh@786 -- # mapfile -t bdevs_name 00:37:58.059 18:36:29 blockdev_raid5f -- bdev/blockdev.sh@786 -- # printf '%s\n' '{' ' "name": "raid5f",' ' "aliases": [' ' "2615b6bf-0e83-4236-8237-c473c919f441"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "2615b6bf-0e83-4236-8237-c473c919f441",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' 
"flush": false,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "raid": {' ' "uuid": "2615b6bf-0e83-4236-8237-c473c919f441",' ' "strip_size_kb": 2,' ' "state": "online",' ' "raid_level": "raid5f",' ' "superblock": false,' ' "num_base_bdevs": 3,' ' "num_base_bdevs_discovered": 3,' ' "num_base_bdevs_operational": 3,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc0",' ' "uuid": "c1760f10-170a-4f8e-88ac-6ffe50152234",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc1",' ' "uuid": "e9c3212c-2824-4cb7-b49a-f70d8ded76bb",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc2",' ' "uuid": "de1ba957-0850-4520-af66-26a30fa3f93a",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' 00:37:58.059 18:36:29 blockdev_raid5f -- bdev/blockdev.sh@786 -- # jq -r .name 00:37:58.319 18:36:29 blockdev_raid5f -- bdev/blockdev.sh@787 -- # bdev_list=("${bdevs_name[@]}") 00:37:58.319 18:36:29 blockdev_raid5f -- bdev/blockdev.sh@789 -- # hello_world_bdev=raid5f 00:37:58.319 18:36:29 blockdev_raid5f -- bdev/blockdev.sh@790 -- # trap - SIGINT SIGTERM EXIT 00:37:58.319 18:36:29 blockdev_raid5f -- bdev/blockdev.sh@791 -- # killprocess 89712 00:37:58.319 18:36:29 blockdev_raid5f -- common/autotest_common.sh@954 -- # '[' -z 89712 ']' 00:37:58.319 18:36:29 blockdev_raid5f -- common/autotest_common.sh@958 -- # kill -0 89712 00:37:58.319 18:36:29 blockdev_raid5f -- common/autotest_common.sh@959 -- # uname 00:37:58.319 18:36:29 blockdev_raid5f -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:37:58.319 
18:36:29 blockdev_raid5f -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 89712 00:37:58.320 18:36:29 blockdev_raid5f -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:37:58.320 18:36:29 blockdev_raid5f -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:37:58.320 18:36:29 blockdev_raid5f -- common/autotest_common.sh@972 -- # echo 'killing process with pid 89712' 00:37:58.320 killing process with pid 89712 00:37:58.320 18:36:29 blockdev_raid5f -- common/autotest_common.sh@973 -- # kill 89712 00:37:58.320 18:36:29 blockdev_raid5f -- common/autotest_common.sh@978 -- # wait 89712 00:38:01.611 18:36:31 blockdev_raid5f -- bdev/blockdev.sh@795 -- # trap cleanup SIGINT SIGTERM EXIT 00:38:01.611 18:36:31 blockdev_raid5f -- bdev/blockdev.sh@797 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b raid5f '' 00:38:01.611 18:36:31 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:38:01.611 18:36:31 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:38:01.611 18:36:31 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:38:01.611 ************************************ 00:38:01.611 START TEST bdev_hello_world 00:38:01.611 ************************************ 00:38:01.611 18:36:31 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b raid5f '' 00:38:01.611 [2024-12-06 18:36:32.044193] Starting SPDK v25.01-pre git sha1 b6a18b192 / DPDK 24.03.0 initialization... 
00:38:01.611 [2024-12-06 18:36:32.044324] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89779 ] 00:38:01.611 [2024-12-06 18:36:32.229386] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:01.611 [2024-12-06 18:36:32.353918] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:38:02.180 [2024-12-06 18:36:32.944674] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:38:02.180 [2024-12-06 18:36:32.944731] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev raid5f 00:38:02.180 [2024-12-06 18:36:32.944776] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:38:02.180 [2024-12-06 18:36:32.945290] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:38:02.180 [2024-12-06 18:36:32.945446] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:38:02.180 [2024-12-06 18:36:32.945465] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:38:02.180 [2024-12-06 18:36:32.945535] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
00:38:02.180 00:38:02.180 [2024-12-06 18:36:32.945562] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:38:03.558 00:38:03.558 real 0m2.506s 00:38:03.558 user 0m2.002s 00:38:03.558 sys 0m0.380s 00:38:03.558 18:36:34 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:38:03.558 18:36:34 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:38:03.558 ************************************ 00:38:03.558 END TEST bdev_hello_world 00:38:03.558 ************************************ 00:38:03.817 18:36:34 blockdev_raid5f -- bdev/blockdev.sh@798 -- # run_test bdev_bounds bdev_bounds '' 00:38:03.817 18:36:34 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:38:03.817 18:36:34 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:38:03.817 18:36:34 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:38:03.817 ************************************ 00:38:03.817 START TEST bdev_bounds 00:38:03.817 ************************************ 00:38:03.817 18:36:34 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:38:03.817 18:36:34 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=89827 00:38:03.817 18:36:34 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:38:03.817 18:36:34 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:38:03.817 18:36:34 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 89827' 00:38:03.817 Process bdevio pid: 89827 00:38:03.817 18:36:34 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 89827 00:38:03.817 18:36:34 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 89827 ']' 00:38:03.817 18:36:34 
blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:03.817 18:36:34 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100 00:38:03.817 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:38:03.817 18:36:34 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:03.817 18:36:34 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 00:38:03.817 18:36:34 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:38:03.817 [2024-12-06 18:36:34.628862] Starting SPDK v25.01-pre git sha1 b6a18b192 / DPDK 24.03.0 initialization... 00:38:03.817 [2024-12-06 18:36:34.628983] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89827 ] 00:38:04.076 [2024-12-06 18:36:34.808521] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:38:04.076 [2024-12-06 18:36:34.938110] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:38:04.076 [2024-12-06 18:36:34.938259] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:38:04.076 [2024-12-06 18:36:34.938307] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:38:04.642 18:36:35 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:38:04.642 18:36:35 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@868 -- # return 0 00:38:04.642 18:36:35 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:38:04.901 I/O targets: 00:38:04.901 raid5f: 131072 blocks of 512 bytes (64 MiB) 00:38:04.901 00:38:04.901 
00:38:04.901 CUnit - A unit testing framework for C - Version 2.1-3 00:38:04.901 http://cunit.sourceforge.net/ 00:38:04.901 00:38:04.901 00:38:04.901 Suite: bdevio tests on: raid5f 00:38:04.901 Test: blockdev write read block ...passed 00:38:04.901 Test: blockdev write zeroes read block ...passed 00:38:04.901 Test: blockdev write zeroes read no split ...passed 00:38:04.901 Test: blockdev write zeroes read split ...passed 00:38:05.159 Test: blockdev write zeroes read split partial ...passed 00:38:05.159 Test: blockdev reset ...passed 00:38:05.159 Test: blockdev write read 8 blocks ...passed 00:38:05.159 Test: blockdev write read size > 128k ...passed 00:38:05.159 Test: blockdev write read invalid size ...passed 00:38:05.159 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:38:05.159 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:38:05.159 Test: blockdev write read max offset ...passed 00:38:05.159 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:38:05.159 Test: blockdev writev readv 8 blocks ...passed 00:38:05.159 Test: blockdev writev readv 30 x 1block ...passed 00:38:05.159 Test: blockdev writev readv block ...passed 00:38:05.159 Test: blockdev writev readv size > 128k ...passed 00:38:05.159 Test: blockdev writev readv size > 128k in two iovs ...passed 00:38:05.159 Test: blockdev comparev and writev ...passed 00:38:05.159 Test: blockdev nvme passthru rw ...passed 00:38:05.159 Test: blockdev nvme passthru vendor specific ...passed 00:38:05.159 Test: blockdev nvme admin passthru ...passed 00:38:05.159 Test: blockdev copy ...passed 00:38:05.159 00:38:05.159 Run Summary: Type Total Ran Passed Failed Inactive 00:38:05.159 suites 1 1 n/a 0 0 00:38:05.159 tests 23 23 23 0 0 00:38:05.159 asserts 130 130 130 0 n/a 00:38:05.159 00:38:05.159 Elapsed time = 0.598 seconds 00:38:05.159 0 00:38:05.159 18:36:35 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 89827 00:38:05.159 
18:36:35 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 89827 ']' 00:38:05.159 18:36:35 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 89827 00:38:05.159 18:36:35 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@959 -- # uname 00:38:05.159 18:36:35 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:38:05.159 18:36:35 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 89827 00:38:05.159 18:36:36 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:38:05.159 18:36:36 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:38:05.159 18:36:36 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 89827' 00:38:05.159 killing process with pid 89827 00:38:05.159 18:36:36 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@973 -- # kill 89827 00:38:05.159 18:36:36 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@978 -- # wait 89827 00:38:07.062 18:36:37 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:38:07.062 00:38:07.062 real 0m3.034s 00:38:07.062 user 0m7.425s 00:38:07.062 sys 0m0.543s 00:38:07.062 18:36:37 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable 00:38:07.062 18:36:37 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:38:07.062 ************************************ 00:38:07.062 END TEST bdev_bounds 00:38:07.062 ************************************ 00:38:07.062 18:36:37 blockdev_raid5f -- bdev/blockdev.sh@799 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json raid5f '' 00:38:07.062 18:36:37 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:38:07.062 18:36:37 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:38:07.062 
18:36:37 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:38:07.062 ************************************ 00:38:07.062 START TEST bdev_nbd 00:38:07.062 ************************************ 00:38:07.062 18:36:37 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json raid5f '' 00:38:07.062 18:36:37 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:38:07.062 18:36:37 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:38:07.062 18:36:37 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:38:07.062 18:36:37 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:38:07.062 18:36:37 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('raid5f') 00:38:07.062 18:36:37 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:38:07.062 18:36:37 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=1 00:38:07.062 18:36:37 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:38:07.062 18:36:37 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:38:07.062 18:36:37 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:38:07.062 18:36:37 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=1 00:38:07.062 18:36:37 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0') 00:38:07.062 18:36:37 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:38:07.062 18:36:37 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('raid5f') 00:38:07.062 18:36:37 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@314 
-- # local bdev_list 00:38:07.062 18:36:37 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=89892 00:38:07.062 18:36:37 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:38:07.062 18:36:37 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 89892 /var/tmp/spdk-nbd.sock 00:38:07.062 18:36:37 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:38:07.062 18:36:37 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 89892 ']' 00:38:07.062 18:36:37 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:38:07.062 18:36:37 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 00:38:07.062 18:36:37 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:38:07.062 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:38:07.062 18:36:37 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 00:38:07.062 18:36:37 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:38:07.062 [2024-12-06 18:36:37.750926] Starting SPDK v25.01-pre git sha1 b6a18b192 / DPDK 24.03.0 initialization... 
00:38:07.062 [2024-12-06 18:36:37.751583] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:38:07.062 [2024-12-06 18:36:37.938115] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:07.320 [2024-12-06 18:36:38.073131] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:38:07.888 18:36:38 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:38:07.888 18:36:38 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:38:07.888 18:36:38 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock raid5f 00:38:07.888 18:36:38 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:38:07.888 18:36:38 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('raid5f') 00:38:07.888 18:36:38 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:38:07.888 18:36:38 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock raid5f 00:38:07.888 18:36:38 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:38:07.888 18:36:38 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('raid5f') 00:38:07.888 18:36:38 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:38:07.888 18:36:38 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:38:07.888 18:36:38 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:38:07.888 18:36:38 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:38:07.888 18:36:38 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:38:07.888 18:36:38 blockdev_raid5f.bdev_nbd -- 
bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid5f 00:38:08.147 18:36:38 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:38:08.147 18:36:38 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:38:08.147 18:36:38 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:38:08.147 18:36:38 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:38:08.147 18:36:38 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:38:08.147 18:36:38 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:38:08.147 18:36:38 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:38:08.147 18:36:38 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:38:08.147 18:36:38 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:38:08.147 18:36:38 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:38:08.147 18:36:38 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:38:08.147 18:36:38 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:38:08.147 1+0 records in 00:38:08.147 1+0 records out 00:38:08.147 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000451468 s, 9.1 MB/s 00:38:08.147 18:36:38 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:38:08.147 18:36:38 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:38:08.147 18:36:38 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:38:08.147 18:36:38 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 
00:38:08.147 18:36:38 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:38:08.147 18:36:38 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:38:08.147 18:36:38 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:38:08.147 18:36:38 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:38:08.406 18:36:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:38:08.406 { 00:38:08.406 "nbd_device": "/dev/nbd0", 00:38:08.406 "bdev_name": "raid5f" 00:38:08.406 } 00:38:08.406 ]' 00:38:08.406 18:36:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:38:08.406 18:36:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:38:08.406 18:36:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:38:08.406 { 00:38:08.406 "nbd_device": "/dev/nbd0", 00:38:08.406 "bdev_name": "raid5f" 00:38:08.406 } 00:38:08.406 ]' 00:38:08.406 18:36:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:38:08.406 18:36:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:38:08.406 18:36:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:38:08.406 18:36:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:38:08.406 18:36:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:38:08.407 18:36:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:38:08.407 18:36:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:38:08.666 18:36:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename 
/dev/nbd0 00:38:08.666 18:36:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:38:08.666 18:36:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:38:08.666 18:36:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:38:08.666 18:36:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:38:08.666 18:36:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:38:08.666 18:36:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:38:08.666 18:36:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:38:08.666 18:36:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:38:08.666 18:36:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:38:08.666 18:36:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:38:08.925 18:36:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:38:08.925 18:36:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:38:08.925 18:36:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:38:08.925 18:36:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:38:08.925 18:36:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:38:08.925 18:36:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:38:08.925 18:36:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:38:08.925 18:36:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:38:08.925 18:36:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:38:08.925 18:36:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:38:08.925 18:36:39 blockdev_raid5f.bdev_nbd 
-- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:38:08.925 18:36:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:38:08.925 18:36:39 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock raid5f /dev/nbd0 00:38:08.925 18:36:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:38:08.925 18:36:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('raid5f') 00:38:08.925 18:36:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:38:08.925 18:36:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0') 00:38:08.925 18:36:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:38:08.925 18:36:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock raid5f /dev/nbd0 00:38:08.925 18:36:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:38:08.925 18:36:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('raid5f') 00:38:08.925 18:36:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:38:08.925 18:36:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:38:08.925 18:36:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:38:08.925 18:36:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:38:08.925 18:36:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:38:08.925 18:36:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:38:08.925 18:36:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid5f /dev/nbd0 00:38:09.185 /dev/nbd0 00:38:09.185 18:36:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:38:09.185 18:36:39 
blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:38:09.185 18:36:39 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:38:09.185 18:36:39 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:38:09.185 18:36:39 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:38:09.185 18:36:39 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:38:09.185 18:36:39 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:38:09.185 18:36:39 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:38:09.185 18:36:39 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:38:09.185 18:36:39 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:38:09.185 18:36:39 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:38:09.185 1+0 records in 00:38:09.185 1+0 records out 00:38:09.185 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000234419 s, 17.5 MB/s 00:38:09.185 18:36:39 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:38:09.185 18:36:39 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:38:09.185 18:36:39 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:38:09.185 18:36:39 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:38:09.185 18:36:39 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:38:09.185 18:36:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:38:09.185 18:36:39 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:38:09.185 18:36:39 blockdev_raid5f.bdev_nbd -- 
bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:38:09.185 18:36:40 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:38:09.185 18:36:40 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:38:09.444 18:36:40 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:38:09.444 { 00:38:09.444 "nbd_device": "/dev/nbd0", 00:38:09.444 "bdev_name": "raid5f" 00:38:09.444 } 00:38:09.444 ]' 00:38:09.445 18:36:40 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:38:09.445 { 00:38:09.445 "nbd_device": "/dev/nbd0", 00:38:09.445 "bdev_name": "raid5f" 00:38:09.445 } 00:38:09.445 ]' 00:38:09.445 18:36:40 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:38:09.445 18:36:40 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:38:09.445 18:36:40 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:38:09.445 18:36:40 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:38:09.445 18:36:40 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=1 00:38:09.445 18:36:40 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 1 00:38:09.445 18:36:40 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=1 00:38:09.445 18:36:40 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 1 -ne 1 ']' 00:38:09.445 18:36:40 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify /dev/nbd0 write 00:38:09.445 18:36:40 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0') 00:38:09.445 18:36:40 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:38:09.445 18:36:40 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:38:09.445 18:36:40 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@72 -- # local 
tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:38:09.445 18:36:40 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:38:09.445 18:36:40 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:38:09.445 256+0 records in 00:38:09.445 256+0 records out 00:38:09.445 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0125447 s, 83.6 MB/s 00:38:09.445 18:36:40 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:38:09.445 18:36:40 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:38:09.445 256+0 records in 00:38:09.445 256+0 records out 00:38:09.445 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0352171 s, 29.8 MB/s 00:38:09.445 18:36:40 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify /dev/nbd0 verify 00:38:09.445 18:36:40 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0') 00:38:09.445 18:36:40 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:38:09.445 18:36:40 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:38:09.445 18:36:40 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:38:09.445 18:36:40 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:38:09.445 18:36:40 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:38:09.445 18:36:40 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:38:09.445 18:36:40 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:38:09.445 18:36:40 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:38:09.445 18:36:40 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:38:09.445 18:36:40 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:38:09.445 18:36:40 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:38:09.445 18:36:40 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:38:09.445 18:36:40 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:38:09.445 18:36:40 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:38:09.445 18:36:40 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:38:09.704 18:36:40 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:38:09.704 18:36:40 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:38:09.704 18:36:40 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:38:09.704 18:36:40 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:38:09.704 18:36:40 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:38:09.704 18:36:40 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:38:09.704 18:36:40 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:38:09.704 18:36:40 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:38:09.704 18:36:40 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:38:09.704 18:36:40 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:38:09.704 18:36:40 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 
00:38:09.964 18:36:40 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:38:09.964 18:36:40 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:38:09.964 18:36:40 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:38:09.964 18:36:40 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:38:09.964 18:36:40 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:38:09.964 18:36:40 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:38:09.964 18:36:40 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:38:09.964 18:36:40 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:38:09.964 18:36:40 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:38:09.964 18:36:40 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:38:09.964 18:36:40 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:38:09.964 18:36:40 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:38:09.964 18:36:40 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:38:09.964 18:36:40 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:38:09.964 18:36:40 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:38:09.964 18:36:40 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:38:10.223 malloc_lvol_verify 00:38:10.223 18:36:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:38:10.482 1acb2877-91c8-4066-a453-724d733587d9 00:38:10.482 18:36:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@136 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:38:10.740 f3ec720a-c2fe-4c6d-9355-43a3efb5287d 00:38:10.740 18:36:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:38:10.740 /dev/nbd0 00:38:10.999 18:36:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:38:10.999 18:36:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:38:10.999 18:36:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:38:10.999 18:36:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:38:10.999 18:36:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:38:10.999 mke2fs 1.47.0 (5-Feb-2023) 00:38:10.999 Discarding device blocks: 0/4096 done 00:38:10.999 Creating filesystem with 4096 1k blocks and 1024 inodes 00:38:10.999 00:38:10.999 Allocating group tables: 0/1 done 00:38:10.999 Writing inode tables: 0/1 done 00:38:10.999 Creating journal (1024 blocks): done 00:38:10.999 Writing superblocks and filesystem accounting information: 0/1 done 00:38:10.999 00:38:10.999 18:36:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:38:10.999 18:36:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:38:10.999 18:36:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:38:10.999 18:36:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:38:10.999 18:36:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:38:10.999 18:36:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:38:10.999 18:36:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:38:10.999 18:36:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:38:10.999 18:36:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:38:10.999 18:36:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:38:10.999 18:36:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:38:10.999 18:36:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:38:10.999 18:36:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:38:10.999 18:36:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:38:10.999 18:36:41 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:38:10.999 18:36:41 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 89892 00:38:10.999 18:36:41 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 89892 ']' 00:38:10.999 18:36:41 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 89892 00:38:10.999 18:36:41 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@959 -- # uname 00:38:10.999 18:36:41 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:38:10.999 18:36:41 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 89892 00:38:11.258 18:36:41 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:38:11.258 18:36:41 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:38:11.258 killing process with pid 89892 00:38:11.258 18:36:41 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 89892' 00:38:11.258 18:36:41 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@973 -- # kill 89892 00:38:11.258 18:36:41 blockdev_raid5f.bdev_nbd -- 
common/autotest_common.sh@978 -- # wait 89892 00:38:12.638 18:36:43 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:38:12.638 00:38:12.638 real 0m5.921s 00:38:12.638 user 0m7.632s 00:38:12.638 sys 0m1.589s 00:38:12.638 18:36:43 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:38:12.638 ************************************ 00:38:12.638 18:36:43 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:38:12.638 END TEST bdev_nbd 00:38:12.638 ************************************ 00:38:12.940 18:36:43 blockdev_raid5f -- bdev/blockdev.sh@800 -- # [[ y == y ]] 00:38:12.940 18:36:43 blockdev_raid5f -- bdev/blockdev.sh@801 -- # '[' raid5f = nvme ']' 00:38:12.940 18:36:43 blockdev_raid5f -- bdev/blockdev.sh@801 -- # '[' raid5f = gpt ']' 00:38:12.940 18:36:43 blockdev_raid5f -- bdev/blockdev.sh@805 -- # run_test bdev_fio fio_test_suite '' 00:38:12.940 18:36:43 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:38:12.940 18:36:43 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:38:12.940 18:36:43 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:38:12.940 ************************************ 00:38:12.940 START TEST bdev_fio 00:38:12.940 ************************************ 00:38:12.940 18:36:43 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1129 -- # fio_test_suite '' 00:38:12.940 18:36:43 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@330 -- # local env_context 00:38:12.940 18:36:43 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@334 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:38:12.940 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:38:12.940 18:36:43 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@335 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:38:12.940 18:36:43 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@338 -- # echo '' 00:38:12.940 18:36:43 blockdev_raid5f.bdev_fio -- 
bdev/blockdev.sh@338 -- # sed s/--env-context=// 00:38:12.940 18:36:43 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@338 -- # env_context= 00:38:12.940 18:36:43 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@339 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:38:12.940 18:36:43 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1284 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:38:12.940 18:36:43 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1285 -- # local workload=verify 00:38:12.940 18:36:43 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1286 -- # local bdev_type=AIO 00:38:12.940 18:36:43 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1287 -- # local env_context= 00:38:12.940 18:36:43 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1288 -- # local fio_dir=/usr/src/fio 00:38:12.940 18:36:43 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1290 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:38:12.940 18:36:43 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -z verify ']' 00:38:12.940 18:36:43 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1299 -- # '[' -n '' ']' 00:38:12.940 18:36:43 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1303 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:38:12.941 18:36:43 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1305 -- # cat 00:38:12.941 18:36:43 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1317 -- # '[' verify == verify ']' 00:38:12.941 18:36:43 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1318 -- # cat 00:38:12.941 18:36:43 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1327 -- # '[' AIO == AIO ']' 00:38:12.941 18:36:43 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1328 -- # /usr/src/fio/fio --version 00:38:12.941 18:36:43 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1328 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 
00:38:12.941 18:36:43 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1329 -- # echo serialize_overlap=1 00:38:12.941 18:36:43 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:38:12.941 18:36:43 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_raid5f]' 00:38:12.941 18:36:43 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=raid5f 00:38:12.941 18:36:43 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@346 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:38:12.941 18:36:43 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@348 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:38:12.941 18:36:43 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1105 -- # '[' 11 -le 1 ']' 00:38:12.941 18:36:43 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1111 -- # xtrace_disable 00:38:12.941 18:36:43 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:38:12.941 ************************************ 00:38:12.941 START TEST bdev_fio_rw_verify 00:38:12.941 ************************************ 00:38:12.941 18:36:43 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1129 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:38:12.941 18:36:43 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1360 -- # fio_plugin 
/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:38:12.941 18:36:43 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:38:12.941 18:36:43 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:38:12.941 18:36:43 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # local sanitizers 00:38:12.941 18:36:43 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:38:12.941 18:36:43 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # shift 00:38:12.941 18:36:43 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # local asan_lib= 00:38:12.941 18:36:43 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:38:12.941 18:36:43 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:38:12.941 18:36:43 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:38:12.941 18:36:43 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # grep libasan 00:38:12.941 18:36:43 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:38:12.941 18:36:43 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:38:12.941 18:36:43 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- 
common/autotest_common.sh@1351 -- # break 00:38:12.941 18:36:43 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:38:12.941 18:36:43 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:38:13.219 job_raid5f: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:38:13.219 fio-3.35 00:38:13.219 Starting 1 thread 00:38:25.425 00:38:25.425 job_raid5f: (groupid=0, jobs=1): err= 0: pid=90093: Fri Dec 6 18:36:55 2024 00:38:25.425 read: IOPS=11.5k, BW=45.0MiB/s (47.2MB/s)(450MiB/10001msec) 00:38:25.425 slat (usec): min=19, max=126, avg=21.05, stdev= 2.39 00:38:25.425 clat (usec): min=10, max=417, avg=138.72, stdev=49.75 00:38:25.425 lat (usec): min=30, max=455, avg=159.77, stdev=49.92 00:38:25.425 clat percentiles (usec): 00:38:25.425 | 50.000th=[ 143], 99.000th=[ 231], 99.900th=[ 255], 99.990th=[ 297], 00:38:25.425 | 99.999th=[ 388] 00:38:25.425 write: IOPS=12.1k, BW=47.4MiB/s (49.7MB/s)(467MiB/9865msec); 0 zone resets 00:38:25.425 slat (usec): min=7, max=316, avg=17.53, stdev= 3.68 00:38:25.425 clat (usec): min=60, max=1205, avg=315.74, stdev=40.70 00:38:25.425 lat (usec): min=76, max=1439, avg=333.27, stdev=41.42 00:38:25.425 clat percentiles (usec): 00:38:25.425 | 50.000th=[ 322], 99.000th=[ 392], 99.900th=[ 537], 99.990th=[ 865], 00:38:25.425 | 99.999th=[ 1139] 00:38:25.425 bw ( KiB/s): min=44976, max=50696, per=98.91%, avg=47968.42, stdev=1606.72, samples=19 00:38:25.425 iops : min=11244, max=12674, avg=11992.11, stdev=401.68, samples=19 00:38:25.425 lat (usec) : 20=0.01%, 50=0.01%, 100=12.09%, 
250=40.01%, 500=47.84% 00:38:25.425 lat (usec) : 750=0.05%, 1000=0.01% 00:38:25.426 lat (msec) : 2=0.01% 00:38:25.426 cpu : usr=98.95%, sys=0.41%, ctx=61, majf=0, minf=9549 00:38:25.426 IO depths : 1=7.7%, 2=20.0%, 4=55.1%, 8=17.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:38:25.426 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:25.426 complete : 0=0.0%, 4=90.0%, 8=10.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:25.426 issued rwts: total=115211,119607,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:25.426 latency : target=0, window=0, percentile=100.00%, depth=8 00:38:25.426 00:38:25.426 Run status group 0 (all jobs): 00:38:25.426 READ: bw=45.0MiB/s (47.2MB/s), 45.0MiB/s-45.0MiB/s (47.2MB/s-47.2MB/s), io=450MiB (472MB), run=10001-10001msec 00:38:25.426 WRITE: bw=47.4MiB/s (49.7MB/s), 47.4MiB/s-47.4MiB/s (49.7MB/s-49.7MB/s), io=467MiB (490MB), run=9865-9865msec 00:38:25.997 ----------------------------------------------------- 00:38:25.997 Suppressions used: 00:38:25.997 count bytes template 00:38:25.997 1 7 /usr/src/fio/parse.c 00:38:25.997 816 78336 /usr/src/fio/iolog.c 00:38:25.997 1 8 libtcmalloc_minimal.so 00:38:25.997 1 904 libcrypto.so 00:38:25.997 ----------------------------------------------------- 00:38:25.997 00:38:25.997 00:38:25.997 real 0m13.056s 00:38:25.997 user 0m13.141s 00:38:25.997 sys 0m0.770s 00:38:25.997 18:36:56 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:38:25.997 18:36:56 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 00:38:25.997 ************************************ 00:38:25.997 END TEST bdev_fio_rw_verify 00:38:25.997 ************************************ 00:38:25.997 18:36:56 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@349 -- # rm -f 00:38:25.997 18:36:56 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:38:25.997 18:36:56 blockdev_raid5f.bdev_fio -- 
bdev/blockdev.sh@353 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:38:25.997 18:36:56 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1284 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:38:25.997 18:36:56 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1285 -- # local workload=trim 00:38:25.997 18:36:56 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1286 -- # local bdev_type= 00:38:25.997 18:36:56 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1287 -- # local env_context= 00:38:25.997 18:36:56 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1288 -- # local fio_dir=/usr/src/fio 00:38:25.997 18:36:56 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1290 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:38:25.997 18:36:56 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -z trim ']' 00:38:25.997 18:36:56 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1299 -- # '[' -n '' ']' 00:38:25.997 18:36:56 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1303 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:38:25.997 18:36:56 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1305 -- # cat 00:38:25.997 18:36:56 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1317 -- # '[' trim == verify ']' 00:38:25.997 18:36:56 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1332 -- # '[' trim == trim ']' 00:38:25.997 18:36:56 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1333 -- # echo rw=trimwrite 00:38:25.997 18:36:56 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # printf '%s\n' '{' ' "name": "raid5f",' ' "aliases": [' ' "2615b6bf-0e83-4236-8237-c473c919f441"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "2615b6bf-0e83-4236-8237-c473c919f441",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' 
"w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "raid": {' ' "uuid": "2615b6bf-0e83-4236-8237-c473c919f441",' ' "strip_size_kb": 2,' ' "state": "online",' ' "raid_level": "raid5f",' ' "superblock": false,' ' "num_base_bdevs": 3,' ' "num_base_bdevs_discovered": 3,' ' "num_base_bdevs_operational": 3,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc0",' ' "uuid": "c1760f10-170a-4f8e-88ac-6ffe50152234",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc1",' ' "uuid": "e9c3212c-2824-4cb7-b49a-f70d8ded76bb",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc2",' ' "uuid": "de1ba957-0850-4520-af66-26a30fa3f93a",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' 00:38:25.997 18:36:56 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:38:26.257 18:36:56 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # [[ -n '' ]] 00:38:26.257 18:36:56 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@360 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:38:26.257 /home/vagrant/spdk_repo/spdk 00:38:26.257 18:36:56 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@361 -- # popd 00:38:26.257 18:36:56 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@362 -- # trap - SIGINT SIGTERM EXIT 00:38:26.257 18:36:56 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@363 -- # return 0 
00:38:26.257 00:38:26.257 real 0m13.349s 00:38:26.257 user 0m13.255s 00:38:26.257 sys 0m0.906s 00:38:26.257 18:36:56 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:38:26.257 18:36:56 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:38:26.257 ************************************ 00:38:26.257 END TEST bdev_fio 00:38:26.257 ************************************ 00:38:26.257 18:36:57 blockdev_raid5f -- bdev/blockdev.sh@812 -- # trap cleanup SIGINT SIGTERM EXIT 00:38:26.257 18:36:57 blockdev_raid5f -- bdev/blockdev.sh@814 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:38:26.257 18:36:57 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:38:26.257 18:36:57 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:38:26.257 18:36:57 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:38:26.257 ************************************ 00:38:26.257 START TEST bdev_verify 00:38:26.257 ************************************ 00:38:26.257 18:36:57 blockdev_raid5f.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:38:26.257 [2024-12-06 18:36:57.158444] Starting SPDK v25.01-pre git sha1 b6a18b192 / DPDK 24.03.0 initialization... 
00:38:26.257 [2024-12-06 18:36:57.158570] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90257 ] 00:38:26.517 [2024-12-06 18:36:57.345710] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:38:26.776 [2024-12-06 18:36:57.481251] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:38:26.776 [2024-12-06 18:36:57.481286] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:38:27.346 Running I/O for 5 seconds... 00:38:29.218 15695.00 IOPS, 61.31 MiB/s [2024-12-06T18:37:01.545Z] 15644.50 IOPS, 61.11 MiB/s [2024-12-06T18:37:02.114Z] 15590.00 IOPS, 60.90 MiB/s [2024-12-06T18:37:03.493Z] 15451.75 IOPS, 60.36 MiB/s [2024-12-06T18:37:03.493Z] 15593.20 IOPS, 60.91 MiB/s 00:38:32.544 Latency(us) 00:38:32.544 [2024-12-06T18:37:03.493Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:32.544 Job: raid5f (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:38:32.544 Verification LBA range: start 0x0 length 0x2000 00:38:32.544 raid5f : 5.02 7832.65 30.60 0.00 0.00 24577.24 254.97 20108.23 00:38:32.544 Job: raid5f (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:38:32.544 Verification LBA range: start 0x2000 length 0x2000 00:38:32.544 raid5f : 5.02 7742.19 30.24 0.00 0.00 24893.61 183.42 20213.51 00:38:32.544 [2024-12-06T18:37:03.493Z] =================================================================================================================== 00:38:32.544 [2024-12-06T18:37:03.493Z] Total : 15574.84 60.84 0.00 0.00 24734.51 183.42 20213.51 00:38:33.923 00:38:33.923 real 0m7.606s 00:38:33.923 user 0m13.911s 00:38:33.923 sys 0m0.405s 00:38:33.923 ************************************ 00:38:33.923 END TEST bdev_verify 00:38:33.923 ************************************ 
00:38:33.923 18:37:04 blockdev_raid5f.bdev_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:38:33.923 18:37:04 blockdev_raid5f.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:38:33.923 18:37:04 blockdev_raid5f -- bdev/blockdev.sh@815 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:38:33.923 18:37:04 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:38:33.923 18:37:04 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:38:33.923 18:37:04 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:38:33.923 ************************************ 00:38:33.923 START TEST bdev_verify_big_io 00:38:33.923 ************************************ 00:38:33.923 18:37:04 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:38:33.923 [2024-12-06 18:37:04.834558] Starting SPDK v25.01-pre git sha1 b6a18b192 / DPDK 24.03.0 initialization... 00:38:33.923 [2024-12-06 18:37:04.834697] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90355 ] 00:38:34.181 [2024-12-06 18:37:05.019872] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:38:34.470 [2024-12-06 18:37:05.160400] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:38:34.470 [2024-12-06 18:37:05.160431] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:38:35.036 Running I/O for 5 seconds... 
00:38:36.984 756.00 IOPS, 47.25 MiB/s [2024-12-06T18:37:09.331Z] 761.00 IOPS, 47.56 MiB/s [2024-12-06T18:37:09.897Z] 802.67 IOPS, 50.17 MiB/s [2024-12-06T18:37:11.291Z] 825.00 IOPS, 51.56 MiB/s [2024-12-06T18:37:11.291Z] 862.40 IOPS, 53.90 MiB/s 00:38:40.342 Latency(us) 00:38:40.342 [2024-12-06T18:37:11.291Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:40.342 Job: raid5f (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:38:40.342 Verification LBA range: start 0x0 length 0x200 00:38:40.342 raid5f : 5.30 430.75 26.92 0.00 0.00 7331226.65 167.79 314993.91 00:38:40.342 Job: raid5f (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:38:40.342 Verification LBA range: start 0x200 length 0x200 00:38:40.342 raid5f : 5.23 424.53 26.53 0.00 0.00 7435732.59 213.85 314993.91 00:38:40.342 [2024-12-06T18:37:11.291Z] =================================================================================================================== 00:38:40.342 [2024-12-06T18:37:11.291Z] Total : 855.28 53.45 0.00 0.00 7382748.89 167.79 314993.91 00:38:41.719 00:38:41.719 real 0m7.929s 00:38:41.719 user 0m14.567s 00:38:41.719 sys 0m0.392s 00:38:41.719 18:37:12 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:38:41.719 18:37:12 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:38:41.719 ************************************ 00:38:41.719 END TEST bdev_verify_big_io 00:38:41.719 ************************************ 00:38:41.977 18:37:12 blockdev_raid5f -- bdev/blockdev.sh@816 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:38:41.977 18:37:12 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:38:41.977 18:37:12 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:38:41.977 18:37:12 
blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:38:41.977 ************************************ 00:38:41.977 START TEST bdev_write_zeroes 00:38:41.977 ************************************ 00:38:41.977 18:37:12 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:38:41.977 [2024-12-06 18:37:12.843357] Starting SPDK v25.01-pre git sha1 b6a18b192 / DPDK 24.03.0 initialization... 00:38:41.977 [2024-12-06 18:37:12.843505] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90459 ] 00:38:42.235 [2024-12-06 18:37:13.028150] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:42.235 [2024-12-06 18:37:13.154956] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:38:43.172 Running I/O for 1 seconds... 
00:38:44.116 27135.00 IOPS, 106.00 MiB/s 00:38:44.116 Latency(us) 00:38:44.116 [2024-12-06T18:37:15.065Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:44.116 Job: raid5f (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:38:44.116 raid5f : 1.01 27104.84 105.88 0.00 0.00 4707.51 1585.76 6553.60 00:38:44.116 [2024-12-06T18:37:15.065Z] =================================================================================================================== 00:38:44.116 [2024-12-06T18:37:15.065Z] Total : 27104.84 105.88 0.00 0.00 4707.51 1585.76 6553.60 00:38:45.492 00:38:45.492 real 0m3.549s 00:38:45.492 user 0m3.044s 00:38:45.492 sys 0m0.375s 00:38:45.492 18:37:16 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:38:45.492 18:37:16 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:38:45.492 ************************************ 00:38:45.492 END TEST bdev_write_zeroes 00:38:45.492 ************************************ 00:38:45.492 18:37:16 blockdev_raid5f -- bdev/blockdev.sh@819 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:38:45.492 18:37:16 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:38:45.492 18:37:16 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:38:45.492 18:37:16 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:38:45.492 ************************************ 00:38:45.492 START TEST bdev_json_nonenclosed 00:38:45.492 ************************************ 00:38:45.492 18:37:16 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:38:45.751 [2024-12-06 
18:37:16.469293] Starting SPDK v25.01-pre git sha1 b6a18b192 / DPDK 24.03.0 initialization... 00:38:45.751 [2024-12-06 18:37:16.469432] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90512 ] 00:38:45.751 [2024-12-06 18:37:16.653511] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:46.009 [2024-12-06 18:37:16.785000] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:38:46.009 [2024-12-06 18:37:16.785137] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:38:46.009 [2024-12-06 18:37:16.785188] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:38:46.009 [2024-12-06 18:37:16.785202] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:38:46.268 00:38:46.268 real 0m0.687s 00:38:46.268 user 0m0.418s 00:38:46.268 sys 0m0.164s 00:38:46.268 18:37:17 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable 00:38:46.268 18:37:17 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:38:46.268 ************************************ 00:38:46.268 END TEST bdev_json_nonenclosed 00:38:46.268 ************************************ 00:38:46.268 18:37:17 blockdev_raid5f -- bdev/blockdev.sh@822 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:38:46.268 18:37:17 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:38:46.268 18:37:17 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:38:46.268 18:37:17 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:38:46.268 
************************************ 00:38:46.268 START TEST bdev_json_nonarray 00:38:46.268 ************************************ 00:38:46.268 18:37:17 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:38:46.526 [2024-12-06 18:37:17.230713] Starting SPDK v25.01-pre git sha1 b6a18b192 / DPDK 24.03.0 initialization... 00:38:46.526 [2024-12-06 18:37:17.230887] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90543 ] 00:38:46.526 [2024-12-06 18:37:17.414830] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:46.785 [2024-12-06 18:37:17.542769] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:38:46.785 [2024-12-06 18:37:17.542903] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
00:38:46.785 [2024-12-06 18:37:17.542929] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:38:46.785 [2024-12-06 18:37:17.542952] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:38:47.044 00:38:47.044 real 0m0.683s 00:38:47.044 user 0m0.408s 00:38:47.044 sys 0m0.170s 00:38:47.044 18:37:17 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable 00:38:47.044 ************************************ 00:38:47.044 END TEST bdev_json_nonarray 00:38:47.044 ************************************ 00:38:47.044 18:37:17 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:38:47.044 18:37:17 blockdev_raid5f -- bdev/blockdev.sh@824 -- # [[ raid5f == bdev ]] 00:38:47.044 18:37:17 blockdev_raid5f -- bdev/blockdev.sh@832 -- # [[ raid5f == gpt ]] 00:38:47.044 18:37:17 blockdev_raid5f -- bdev/blockdev.sh@836 -- # [[ raid5f == crypto_sw ]] 00:38:47.044 18:37:17 blockdev_raid5f -- bdev/blockdev.sh@848 -- # trap - SIGINT SIGTERM EXIT 00:38:47.044 18:37:17 blockdev_raid5f -- bdev/blockdev.sh@849 -- # cleanup 00:38:47.044 18:37:17 blockdev_raid5f -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:38:47.044 18:37:17 blockdev_raid5f -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:38:47.044 18:37:17 blockdev_raid5f -- bdev/blockdev.sh@26 -- # [[ raid5f == rbd ]] 00:38:47.044 18:37:17 blockdev_raid5f -- bdev/blockdev.sh@30 -- # [[ raid5f == daos ]] 00:38:47.044 18:37:17 blockdev_raid5f -- bdev/blockdev.sh@34 -- # [[ raid5f = \g\p\t ]] 00:38:47.044 18:37:17 blockdev_raid5f -- bdev/blockdev.sh@40 -- # [[ raid5f == xnvme ]] 00:38:47.044 00:38:47.044 real 0m50.865s 00:38:47.044 user 1m7.441s 00:38:47.044 sys 0m6.251s 00:38:47.044 18:37:17 blockdev_raid5f -- common/autotest_common.sh@1130 -- # xtrace_disable 00:38:47.044 18:37:17 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:38:47.044 
************************************ 00:38:47.044 END TEST blockdev_raid5f 00:38:47.044 ************************************ 00:38:47.044 18:37:17 -- spdk/autotest.sh@194 -- # uname -s 00:38:47.044 18:37:17 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:38:47.044 18:37:17 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:38:47.044 18:37:17 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:38:47.044 18:37:17 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:38:47.044 18:37:17 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:38:47.044 18:37:17 -- spdk/autotest.sh@260 -- # timing_exit lib 00:38:47.044 18:37:17 -- common/autotest_common.sh@732 -- # xtrace_disable 00:38:47.044 18:37:17 -- common/autotest_common.sh@10 -- # set +x 00:38:47.303 18:37:18 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:38:47.303 18:37:18 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 00:38:47.303 18:37:18 -- spdk/autotest.sh@276 -- # '[' 0 -eq 1 ']' 00:38:47.303 18:37:18 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:38:47.303 18:37:18 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:38:47.303 18:37:18 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:38:47.303 18:37:18 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:38:47.303 18:37:18 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:38:47.303 18:37:18 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:38:47.303 18:37:18 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:38:47.303 18:37:18 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:38:47.303 18:37:18 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:38:47.303 18:37:18 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:38:47.303 18:37:18 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:38:47.303 18:37:18 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:38:47.303 18:37:18 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:38:47.303 18:37:18 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:38:47.303 18:37:18 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]] 00:38:47.303 18:37:18 -- spdk/autotest.sh@385 -- # trap - SIGINT 
SIGTERM EXIT 00:38:47.303 18:37:18 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup 00:38:47.303 18:37:18 -- common/autotest_common.sh@726 -- # xtrace_disable 00:38:47.303 18:37:18 -- common/autotest_common.sh@10 -- # set +x 00:38:47.303 18:37:18 -- spdk/autotest.sh@388 -- # autotest_cleanup 00:38:47.303 18:37:18 -- common/autotest_common.sh@1396 -- # local autotest_es=0 00:38:47.303 18:37:18 -- common/autotest_common.sh@1397 -- # xtrace_disable 00:38:47.303 18:37:18 -- common/autotest_common.sh@10 -- # set +x 00:38:49.836 INFO: APP EXITING 00:38:49.836 INFO: killing all VMs 00:38:49.836 INFO: killing vhost app 00:38:49.836 INFO: EXIT DONE 00:38:50.094 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:38:50.094 Waiting for block devices as requested 00:38:50.094 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:38:50.353 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:38:51.290 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:38:51.290 Cleaning 00:38:51.290 Removing: /var/run/dpdk/spdk0/config 00:38:51.290 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:38:51.290 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:38:51.290 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:38:51.290 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:38:51.290 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:38:51.290 Removing: /var/run/dpdk/spdk0/hugepage_info 00:38:51.290 Removing: /dev/shm/spdk_tgt_trace.pid56631 00:38:51.290 Removing: /var/run/dpdk/spdk0 00:38:51.290 Removing: /var/run/dpdk/spdk_pid56391 00:38:51.290 Removing: /var/run/dpdk/spdk_pid56631 00:38:51.290 Removing: /var/run/dpdk/spdk_pid56866 00:38:51.290 Removing: /var/run/dpdk/spdk_pid56974 00:38:51.290 Removing: /var/run/dpdk/spdk_pid57026 00:38:51.290 Removing: /var/run/dpdk/spdk_pid57165 00:38:51.290 Removing: /var/run/dpdk/spdk_pid57183 
00:38:51.290 Removing: /var/run/dpdk/spdk_pid57393 00:38:51.290 Removing: /var/run/dpdk/spdk_pid57505 00:38:51.290 Removing: /var/run/dpdk/spdk_pid57617 00:38:51.290 Removing: /var/run/dpdk/spdk_pid57739 00:38:51.290 Removing: /var/run/dpdk/spdk_pid57847 00:38:51.290 Removing: /var/run/dpdk/spdk_pid57892 00:38:51.290 Removing: /var/run/dpdk/spdk_pid57929 00:38:51.290 Removing: /var/run/dpdk/spdk_pid57999 00:38:51.290 Removing: /var/run/dpdk/spdk_pid58111 00:38:51.290 Removing: /var/run/dpdk/spdk_pid58558 00:38:51.290 Removing: /var/run/dpdk/spdk_pid58643 00:38:51.290 Removing: /var/run/dpdk/spdk_pid58718 00:38:51.290 Removing: /var/run/dpdk/spdk_pid58734 00:38:51.290 Removing: /var/run/dpdk/spdk_pid58891 00:38:51.290 Removing: /var/run/dpdk/spdk_pid58907 00:38:51.290 Removing: /var/run/dpdk/spdk_pid59067 00:38:51.290 Removing: /var/run/dpdk/spdk_pid59084 00:38:51.290 Removing: /var/run/dpdk/spdk_pid59148 00:38:51.290 Removing: /var/run/dpdk/spdk_pid59177 00:38:51.290 Removing: /var/run/dpdk/spdk_pid59241 00:38:51.290 Removing: /var/run/dpdk/spdk_pid59259 00:38:51.290 Removing: /var/run/dpdk/spdk_pid59460 00:38:51.290 Removing: /var/run/dpdk/spdk_pid59496 00:38:51.290 Removing: /var/run/dpdk/spdk_pid59580 00:38:51.290 Removing: /var/run/dpdk/spdk_pid60948 00:38:51.290 Removing: /var/run/dpdk/spdk_pid61154 00:38:51.290 Removing: /var/run/dpdk/spdk_pid61294 00:38:51.290 Removing: /var/run/dpdk/spdk_pid61943 00:38:51.290 Removing: /var/run/dpdk/spdk_pid62159 00:38:51.549 Removing: /var/run/dpdk/spdk_pid62300 00:38:51.549 Removing: /var/run/dpdk/spdk_pid62945 00:38:51.549 Removing: /var/run/dpdk/spdk_pid63275 00:38:51.549 Removing: /var/run/dpdk/spdk_pid63419 00:38:51.549 Removing: /var/run/dpdk/spdk_pid64804 00:38:51.549 Removing: /var/run/dpdk/spdk_pid65063 00:38:51.549 Removing: /var/run/dpdk/spdk_pid65203 00:38:51.549 Removing: /var/run/dpdk/spdk_pid66596 00:38:51.549 Removing: /var/run/dpdk/spdk_pid66849 00:38:51.549 Removing: /var/run/dpdk/spdk_pid66999 
00:38:51.549 Removing: /var/run/dpdk/spdk_pid68374 00:38:51.549 Removing: /var/run/dpdk/spdk_pid68824 00:38:51.549 Removing: /var/run/dpdk/spdk_pid68971 00:38:51.549 Removing: /var/run/dpdk/spdk_pid70445 00:38:51.549 Removing: /var/run/dpdk/spdk_pid70710 00:38:51.549 Removing: /var/run/dpdk/spdk_pid70861 00:38:51.549 Removing: /var/run/dpdk/spdk_pid72348 00:38:51.549 Removing: /var/run/dpdk/spdk_pid72608 00:38:51.549 Removing: /var/run/dpdk/spdk_pid72758 00:38:51.549 Removing: /var/run/dpdk/spdk_pid74238 00:38:51.549 Removing: /var/run/dpdk/spdk_pid74726 00:38:51.549 Removing: /var/run/dpdk/spdk_pid74872 00:38:51.549 Removing: /var/run/dpdk/spdk_pid75020 00:38:51.549 Removing: /var/run/dpdk/spdk_pid75456 00:38:51.549 Removing: /var/run/dpdk/spdk_pid76197 00:38:51.549 Removing: /var/run/dpdk/spdk_pid76592 00:38:51.549 Removing: /var/run/dpdk/spdk_pid77276 00:38:51.549 Removing: /var/run/dpdk/spdk_pid77735 00:38:51.549 Removing: /var/run/dpdk/spdk_pid78500 00:38:51.549 Removing: /var/run/dpdk/spdk_pid78909 00:38:51.549 Removing: /var/run/dpdk/spdk_pid80870 00:38:51.549 Removing: /var/run/dpdk/spdk_pid81303 00:38:51.549 Removing: /var/run/dpdk/spdk_pid81743 00:38:51.549 Removing: /var/run/dpdk/spdk_pid83821 00:38:51.549 Removing: /var/run/dpdk/spdk_pid84317 00:38:51.549 Removing: /var/run/dpdk/spdk_pid84837 00:38:51.549 Removing: /var/run/dpdk/spdk_pid85895 00:38:51.549 Removing: /var/run/dpdk/spdk_pid86218 00:38:51.549 Removing: /var/run/dpdk/spdk_pid87158 00:38:51.549 Removing: /var/run/dpdk/spdk_pid87481 00:38:51.549 Removing: /var/run/dpdk/spdk_pid88426 00:38:51.549 Removing: /var/run/dpdk/spdk_pid88749 00:38:51.549 Removing: /var/run/dpdk/spdk_pid89425 00:38:51.549 Removing: /var/run/dpdk/spdk_pid89712 00:38:51.549 Removing: /var/run/dpdk/spdk_pid89779 00:38:51.549 Removing: /var/run/dpdk/spdk_pid89827 00:38:51.549 Removing: /var/run/dpdk/spdk_pid90078 00:38:51.549 Removing: /var/run/dpdk/spdk_pid90257 00:38:51.549 Removing: /var/run/dpdk/spdk_pid90355 
00:38:51.549 Removing: /var/run/dpdk/spdk_pid90459 00:38:51.549 Removing: /var/run/dpdk/spdk_pid90512 00:38:51.549 Removing: /var/run/dpdk/spdk_pid90543 00:38:51.549 Clean 00:38:51.858 18:37:22 -- common/autotest_common.sh@1453 -- # return 0 00:38:51.858 18:37:22 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup 00:38:51.858 18:37:22 -- common/autotest_common.sh@732 -- # xtrace_disable 00:38:51.858 18:37:22 -- common/autotest_common.sh@10 -- # set +x 00:38:51.858 18:37:22 -- spdk/autotest.sh@391 -- # timing_exit autotest 00:38:51.858 18:37:22 -- common/autotest_common.sh@732 -- # xtrace_disable 00:38:51.858 18:37:22 -- common/autotest_common.sh@10 -- # set +x 00:38:51.858 18:37:22 -- spdk/autotest.sh@392 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:38:51.858 18:37:22 -- spdk/autotest.sh@394 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:38:51.859 18:37:22 -- spdk/autotest.sh@394 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:38:51.859 18:37:22 -- spdk/autotest.sh@396 -- # [[ y == y ]] 00:38:51.859 18:37:22 -- spdk/autotest.sh@398 -- # hostname 00:38:51.859 18:37:22 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:38:52.137 geninfo: WARNING: invalid characters removed from testname! 
00:39:18.681 18:37:45 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:39:18.681 18:37:48 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:39:20.057 18:37:50 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:39:21.965 18:37:52 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:39:23.890 18:37:54 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o 
/home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:39:26.423 18:37:56 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:39:28.326 18:37:58 -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:39:28.326 18:37:58 -- spdk/autorun.sh@1 -- $ timing_finish 00:39:28.326 18:37:58 -- common/autotest_common.sh@738 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/timing.txt ]] 00:39:28.326 18:37:58 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:39:28.326 18:37:58 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:39:28.326 18:37:58 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:39:28.326 + [[ -n 5214 ]] 00:39:28.326 + sudo kill 5214 00:39:28.335 [Pipeline] } 00:39:28.350 [Pipeline] // timeout 00:39:28.355 [Pipeline] } 00:39:28.366 [Pipeline] // stage 00:39:28.370 [Pipeline] } 00:39:28.378 [Pipeline] // catchError 00:39:28.384 [Pipeline] stage 00:39:28.386 [Pipeline] { (Stop VM) 00:39:28.394 [Pipeline] sh 00:39:28.672 + vagrant halt 00:39:31.201 ==> default: Halting domain... 00:39:37.830 [Pipeline] sh 00:39:38.114 + vagrant destroy -f 00:39:40.652 ==> default: Removing domain... 
00:39:40.664 [Pipeline] sh 00:39:40.947 + mv output /var/jenkins/workspace/raid-vg-autotest/output 00:39:40.957 [Pipeline] } 00:39:40.974 [Pipeline] // stage 00:39:40.980 [Pipeline] } 00:39:40.995 [Pipeline] // dir 00:39:40.999 [Pipeline] } 00:39:41.015 [Pipeline] // wrap 00:39:41.020 [Pipeline] } 00:39:41.030 [Pipeline] // catchError 00:39:41.038 [Pipeline] stage 00:39:41.040 [Pipeline] { (Epilogue) 00:39:41.049 [Pipeline] sh 00:39:41.329 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:39:45.527 [Pipeline] catchError 00:39:45.528 [Pipeline] { 00:39:45.539 [Pipeline] sh 00:39:45.820 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:39:46.080 Artifacts sizes are good 00:39:46.088 [Pipeline] } 00:39:46.098 [Pipeline] // catchError 00:39:46.106 [Pipeline] archiveArtifacts 00:39:46.111 Archiving artifacts 00:39:46.205 [Pipeline] cleanWs 00:39:46.214 [WS-CLEANUP] Deleting project workspace... 00:39:46.214 [WS-CLEANUP] Deferred wipeout is used... 00:39:46.220 [WS-CLEANUP] done 00:39:46.222 [Pipeline] } 00:39:46.235 [Pipeline] // stage 00:39:46.238 [Pipeline] } 00:39:46.254 [Pipeline] // node 00:39:46.258 [Pipeline] End of Pipeline 00:39:46.335 Finished: SUCCESS